00:00:00.001 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 599 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3264 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.040 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.040 The recommended git tool is: git 00:00:00.040 using credential 00000000-0000-0000-0000-000000000002 00:00:00.042 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.058 Fetching changes from the remote Git repository 00:00:00.087 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.110 Using shallow fetch with depth 1 00:00:00.110 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.110 > git --version # timeout=10 00:00:00.133 > git --version # 'git version 2.39.2' 00:00:00.133 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.160 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.160 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:03.887 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:03.903 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:03.917 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD) 00:00:03.917 > git config core.sparsecheckout # timeout=10 00:00:03.928 > git read-tree -mu HEAD # timeout=10 00:00:03.946 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5 00:00:03.967 Commit message: "inventory: add WCP3 to free inventory" 00:00:03.967 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10 00:00:04.071 [Pipeline] Start of Pipeline 00:00:04.086 [Pipeline] library 00:00:04.087 Loading library shm_lib@master 00:00:04.087 Library shm_lib@master is cached. Copying from home. 00:00:04.102 [Pipeline] node 00:00:04.111 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:04.113 [Pipeline] { 00:00:04.125 [Pipeline] catchError 00:00:04.126 [Pipeline] { 00:00:04.140 [Pipeline] wrap 00:00:04.152 [Pipeline] { 00:00:04.161 [Pipeline] stage 00:00:04.163 [Pipeline] { (Prologue) 00:00:04.182 [Pipeline] echo 00:00:04.183 Node: VM-host-SM9 00:00:04.188 [Pipeline] cleanWs 00:00:04.200 [WS-CLEANUP] Deleting project workspace... 00:00:04.200 [WS-CLEANUP] Deferred wipeout is used... 
00:00:04.205 [WS-CLEANUP] done 00:00:04.403 [Pipeline] setCustomBuildProperty 00:00:04.466 [Pipeline] httpRequest 00:00:04.487 [Pipeline] echo 00:00:04.489 Sorcerer 10.211.164.101 is alive 00:00:04.496 [Pipeline] httpRequest 00:00:04.499 HttpMethod: GET 00:00:04.500 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.500 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:04.506 Response Code: HTTP/1.1 200 OK 00:00:04.507 Success: Status code 200 is in the accepted range: 200,404 00:00:04.507 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:07.745 [Pipeline] sh 00:00:08.027 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz 00:00:08.042 [Pipeline] httpRequest 00:00:08.061 [Pipeline] echo 00:00:08.063 Sorcerer 10.211.164.101 is alive 00:00:08.071 [Pipeline] httpRequest 00:00:08.075 HttpMethod: GET 00:00:08.075 URL: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:08.076 Sending request to url: http://10.211.164.101/packages/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:00:08.097 Response Code: HTTP/1.1 200 OK 00:00:08.098 Success: Status code 200 is in the accepted range: 200,404 00:00:08.099 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:17.700 [Pipeline] sh 00:01:17.979 + tar --no-same-owner -xf spdk_4b94202c659be49093c32ec1d2d75efdacf00691.tar.gz 00:01:20.525 [Pipeline] sh 00:01:20.802 + git -C spdk log --oneline -n5 00:01:20.802 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:01:20.802 507e9ba07 nvme: add lock_depth for ctrlr_lock 00:01:20.802 62fda7b5f nvme: check pthread_mutex_destroy() return value 00:01:20.802 e03c164a1 nvme: add nvme_ctrlr_lock 00:01:20.802 d61f89a86 nvme/cuse: Add ctrlr_lock for cuse register and unregister 00:01:20.823 [Pipeline] withCredentials 00:01:20.833 > git --version # timeout=10 00:01:20.845 > git --version # 'git version 2.39.2' 00:01:20.860 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:20.862 [Pipeline] { 00:01:20.871 [Pipeline] retry 00:01:20.873 [Pipeline] { 00:01:20.890 [Pipeline] sh 00:01:21.168 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:26.453 [Pipeline] } 00:01:26.479 [Pipeline] // retry 00:01:26.484 [Pipeline] } 00:01:26.504 [Pipeline] // withCredentials 00:01:26.514 [Pipeline] httpRequest 00:01:26.539 [Pipeline] echo 00:01:26.541 Sorcerer 10.211.164.101 is alive 00:01:26.551 [Pipeline] httpRequest 00:01:26.556 HttpMethod: GET 00:01:26.557 URL: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:26.557 Sending request to url: http://10.211.164.101/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:26.561 Response Code: HTTP/1.1 200 OK 00:01:26.561 Success: Status code 200 is in the accepted range: 200,404 00:01:26.562 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:32.538 [Pipeline] sh 00:01:32.813 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:34.203 [Pipeline] sh 00:01:34.490 + git -C dpdk log --oneline -n5 00:01:34.490 caf0f5d395 version: 22.11.4 00:01:34.491 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:34.491 dc9c799c7d vhost: fix missing 
spinlock unlock 00:01:34.491 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:34.491 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:34.511 [Pipeline] writeFile 00:01:34.526 [Pipeline] sh 00:01:34.821 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:34.832 [Pipeline] sh 00:01:35.111 + cat autorun-spdk.conf 00:01:35.111 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.111 SPDK_TEST_NVMF=1 00:01:35.111 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.111 SPDK_TEST_URING=1 00:01:35.111 SPDK_TEST_USDT=1 00:01:35.111 SPDK_RUN_UBSAN=1 00:01:35.111 NET_TYPE=virt 00:01:35.111 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:35.111 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:35.111 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:35.118 RUN_NIGHTLY=1 00:01:35.120 [Pipeline] } 00:01:35.136 [Pipeline] // stage 00:01:35.152 [Pipeline] stage 00:01:35.155 [Pipeline] { (Run VM) 00:01:35.169 [Pipeline] sh 00:01:35.448 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:35.448 + echo 'Start stage prepare_nvme.sh' 00:01:35.449 Start stage prepare_nvme.sh 00:01:35.449 + [[ -n 5 ]] 00:01:35.449 + disk_prefix=ex5 00:01:35.449 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:35.449 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:35.449 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:35.449 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:35.449 ++ SPDK_TEST_NVMF=1 00:01:35.449 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:35.449 ++ SPDK_TEST_URING=1 00:01:35.449 ++ SPDK_TEST_USDT=1 00:01:35.449 ++ SPDK_RUN_UBSAN=1 00:01:35.449 ++ NET_TYPE=virt 00:01:35.449 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:35.449 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:01:35.449 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:35.449 ++ RUN_NIGHTLY=1 00:01:35.449 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:35.449 + nvme_files=() 00:01:35.449 + declare -A nvme_files 00:01:35.449 + backend_dir=/var/lib/libvirt/images/backends 00:01:35.449 + nvme_files['nvme.img']=5G 00:01:35.449 + nvme_files['nvme-cmb.img']=5G 00:01:35.449 + nvme_files['nvme-multi0.img']=4G 00:01:35.449 + nvme_files['nvme-multi1.img']=4G 00:01:35.449 + nvme_files['nvme-multi2.img']=4G 00:01:35.449 + nvme_files['nvme-openstack.img']=8G 00:01:35.449 + nvme_files['nvme-zns.img']=5G 00:01:35.449 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:35.449 + (( SPDK_TEST_FTL == 1 )) 00:01:35.449 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:35.449 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:01:35.449 + for nvme in "${!nvme_files[@]}" 00:01:35.449 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:35.449 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:35.449 + for nvme in "${!nvme_files[@]}" 00:01:35.449 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:35.449 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:35.449 + for nvme in "${!nvme_files[@]}" 00:01:35.449 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:35.449 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:35.449 + for nvme in "${!nvme_files[@]}" 00:01:35.449 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:35.449 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:35.449 + for nvme in "${!nvme_files[@]}" 00:01:35.449 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:35.449 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:35.449 + for nvme in "${!nvme_files[@]}" 00:01:35.449 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:35.707 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:35.707 + for nvme in "${!nvme_files[@]}" 00:01:35.708 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:35.708 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:35.708 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:35.708 + echo 'End stage prepare_nvme.sh' 00:01:35.708 End stage prepare_nvme.sh 00:01:35.719 [Pipeline] sh 00:01:36.000 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:36.000 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora38 00:01:36.259 00:01:36.259 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:36.259 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:36.259 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:36.259 HELP=0 00:01:36.259 DRY_RUN=0 00:01:36.259 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:36.259 NVME_DISKS_TYPE=nvme,nvme, 00:01:36.259 NVME_AUTO_CREATE=0 00:01:36.259 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:36.259 NVME_CMB=,, 00:01:36.259 NVME_PMR=,, 00:01:36.259 NVME_ZNS=,, 00:01:36.259 NVME_MS=,, 00:01:36.259 NVME_FDP=,, 
00:01:36.259 SPDK_VAGRANT_DISTRO=fedora38 00:01:36.259 SPDK_VAGRANT_VMCPU=10 00:01:36.259 SPDK_VAGRANT_VMRAM=12288 00:01:36.259 SPDK_VAGRANT_PROVIDER=libvirt 00:01:36.259 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:36.259 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:36.259 SPDK_OPENSTACK_NETWORK=0 00:01:36.259 VAGRANT_PACKAGE_BOX=0 00:01:36.260 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:36.260 FORCE_DISTRO=true 00:01:36.260 VAGRANT_BOX_VERSION= 00:01:36.260 EXTRA_VAGRANTFILES= 00:01:36.260 NIC_MODEL=e1000 00:01:36.260 00:01:36.260 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt' 00:01:36.260 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:38.803 Bringing machine 'default' up with 'libvirt' provider... 00:01:39.738 ==> default: Creating image (snapshot of base box volume). 00:01:39.738 ==> default: Creating domain with the following settings... 00:01:39.738 ==> default: -- Name: fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720856865_4c58c7a0203e1a09c9b2 00:01:39.738 ==> default: -- Domain type: kvm 00:01:39.739 ==> default: -- Cpus: 10 00:01:39.739 ==> default: -- Feature: acpi 00:01:39.739 ==> default: -- Feature: apic 00:01:39.739 ==> default: -- Feature: pae 00:01:39.739 ==> default: -- Memory: 12288M 00:01:39.739 ==> default: -- Memory Backing: hugepages: 00:01:39.739 ==> default: -- Management MAC: 00:01:39.739 ==> default: -- Loader: 00:01:39.739 ==> default: -- Nvram: 00:01:39.739 ==> default: -- Base box: spdk/fedora38 00:01:39.739 ==> default: -- Storage pool: default 00:01:39.739 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1716830599-074-updated-1705279005_default_1720856865_4c58c7a0203e1a09c9b2.img (20G) 00:01:39.739 ==> default: -- Volume Cache: default 00:01:39.739 ==> default: -- Kernel: 00:01:39.739 ==> default: -- Initrd: 00:01:39.739 ==> default: -- Graphics Type: vnc 00:01:39.739 ==> default: -- Graphics Port: -1 00:01:39.739 ==> default: -- Graphics IP: 127.0.0.1 00:01:39.739 ==> default: -- Graphics Password: Not defined 00:01:39.739 ==> default: -- Video Type: cirrus 00:01:39.739 ==> default: -- Video VRAM: 9216 00:01:39.739 ==> default: -- Sound Type: 00:01:39.739 ==> default: -- Keymap: en-us 00:01:39.739 ==> default: -- TPM Path: 00:01:39.739 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:39.739 ==> default: -- Command line args: 00:01:39.739 ==> default: -> value=-device, 00:01:39.739 ==> default: -> value=nvme,id=nvme-0,serial=12340, 00:01:39.739 ==> default: -> value=-drive, 00:01:39.739 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:39.739 ==> default: -> value=-device, 00:01:39.739 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:39.739 ==> default: -> value=-device, 00:01:39.739 ==> default: -> value=nvme,id=nvme-1,serial=12341, 00:01:39.739 ==> default: -> value=-drive, 00:01:39.739 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:39.739 ==> default: -> value=-device, 00:01:39.739 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:39.739 ==> default: -> value=-drive, 00:01:39.739 ==> default: 
-> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:39.739 ==> default: -> value=-device, 00:01:39.739 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:39.739 ==> default: -> value=-drive, 00:01:39.739 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:39.739 ==> default: -> value=-device, 00:01:39.739 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:39.739 ==> default: Creating shared folders metadata... 00:01:39.739 ==> default: Starting domain. 00:01:41.117 ==> default: Waiting for domain to get an IP address... 00:01:59.203 ==> default: Waiting for SSH to become available... 00:02:00.582 ==> default: Configuring and enabling network interfaces... 00:02:04.776 default: SSH address: 192.168.121.107:22 00:02:04.776 default: SSH username: vagrant 00:02:04.776 default: SSH auth method: private key 00:02:07.305 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:13.868 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/dpdk/ => /home/vagrant/spdk_repo/dpdk 00:02:20.428 ==> default: Mounting SSHFS shared folder... 00:02:21.389 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:02:21.389 ==> default: Checking Mount.. 00:02:22.767 ==> default: Folder Successfully Mounted! 00:02:22.767 ==> default: Running provisioner: file... 00:02:23.334 default: ~/.gitconfig => .gitconfig 00:02:23.902 00:02:23.902 SUCCESS! 00:02:23.902 00:02:23.902 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:02:23.902 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:23.902 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 00:02:23.902 00:02:23.911 [Pipeline] } 00:02:23.930 [Pipeline] // stage 00:02:23.939 [Pipeline] dir 00:02:23.939 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora38-libvirt 00:02:23.941 [Pipeline] { 00:02:23.955 [Pipeline] catchError 00:02:23.957 [Pipeline] { 00:02:23.971 [Pipeline] sh 00:02:24.252 + vagrant ssh-config --host vagrant 00:02:24.252 + sed -ne /^Host/,$p 00:02:24.252 + tee ssh_conf 00:02:27.541 Host vagrant 00:02:27.541 HostName 192.168.121.107 00:02:27.541 User vagrant 00:02:27.541 Port 22 00:02:27.541 UserKnownHostsFile /dev/null 00:02:27.541 StrictHostKeyChecking no 00:02:27.541 PasswordAuthentication no 00:02:27.541 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1716830599-074-updated-1705279005/libvirt/fedora38 00:02:27.541 IdentitiesOnly yes 00:02:27.541 LogLevel FATAL 00:02:27.541 ForwardAgent yes 00:02:27.541 ForwardX11 yes 00:02:27.541 00:02:27.554 [Pipeline] withEnv 00:02:27.556 [Pipeline] { 00:02:27.570 [Pipeline] sh 00:02:27.848 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:27.848 source /etc/os-release 00:02:27.848 [[ -e /image.version ]] && img=$(< /image.version) 00:02:27.848 # Minimal, systemd-like check. 
00:02:27.848 if [[ -e /.dockerenv ]]; then 00:02:27.848 # Clear garbage from the node's name: 00:02:27.848 # agt-er_autotest_547-896 -> autotest_547-896 00:02:27.848 # $HOSTNAME is the actual container id 00:02:27.848 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:27.848 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:27.848 # We can assume this is a mount from a host where container is running, 00:02:27.848 # so fetch its hostname to easily identify the target swarm worker. 00:02:27.848 container="$(< /etc/hostname) ($agent)" 00:02:27.848 else 00:02:27.848 # Fallback 00:02:27.848 container=$agent 00:02:27.848 fi 00:02:27.848 fi 00:02:27.848 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:27.848 00:02:28.122 [Pipeline] } 00:02:28.147 [Pipeline] // withEnv 00:02:28.156 [Pipeline] setCustomBuildProperty 00:02:28.174 [Pipeline] stage 00:02:28.176 [Pipeline] { (Tests) 00:02:28.195 [Pipeline] sh 00:02:28.476 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:28.749 [Pipeline] sh 00:02:29.030 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:29.340 [Pipeline] timeout 00:02:29.341 Timeout set to expire in 30 min 00:02:29.343 [Pipeline] { 00:02:29.363 [Pipeline] sh 00:02:29.642 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:30.212 HEAD is now at 4b94202c6 lib/event: Bug fix for framework_set_scheduler 00:02:30.225 [Pipeline] sh 00:02:30.505 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:30.779 [Pipeline] sh 00:02:31.059 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:31.338 [Pipeline] sh 00:02:31.618 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:31.877 ++ readlink -f spdk_repo 00:02:31.877 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:31.877 + [[ -n /home/vagrant/spdk_repo ]] 00:02:31.877 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:31.877 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:31.877 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:31.877 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:31.877 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:31.877 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:31.877 + cd /home/vagrant/spdk_repo 00:02:31.877 + source /etc/os-release 00:02:31.877 ++ NAME='Fedora Linux' 00:02:31.877 ++ VERSION='38 (Cloud Edition)' 00:02:31.877 ++ ID=fedora 00:02:31.877 ++ VERSION_ID=38 00:02:31.877 ++ VERSION_CODENAME= 00:02:31.877 ++ PLATFORM_ID=platform:f38 00:02:31.877 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:02:31.877 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:31.877 ++ LOGO=fedora-logo-icon 00:02:31.877 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:02:31.877 ++ HOME_URL=https://fedoraproject.org/ 00:02:31.877 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:02:31.877 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:31.877 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:31.877 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:31.877 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:02:31.877 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:31.877 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:02:31.877 ++ SUPPORT_END=2024-05-14 00:02:31.877 ++ VARIANT='Cloud Edition' 00:02:31.877 ++ VARIANT_ID=cloud 00:02:31.877 + uname -a 00:02:31.877 Linux fedora38-cloud-1716830599-074-updated-1705279005 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:02:31.877 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:31.877 Hugepages 00:02:31.877 node hugesize free / total 00:02:31.877 node0 1048576kB 0 / 0 00:02:31.877 node0 2048kB 0 / 0 00:02:31.877 00:02:31.877 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:31.877 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:31.877 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:31.877 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:31.877 + rm -f /tmp/spdk-ld-path 00:02:31.877 + source autorun-spdk.conf 00:02:31.877 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:31.877 ++ SPDK_TEST_NVMF=1 00:02:31.877 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:31.877 ++ SPDK_TEST_URING=1 00:02:31.877 ++ SPDK_TEST_USDT=1 00:02:31.877 ++ SPDK_RUN_UBSAN=1 00:02:31.877 ++ NET_TYPE=virt 00:02:31.877 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:31.877 ++ SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:31.877 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:31.877 ++ RUN_NIGHTLY=1 00:02:31.877 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:31.877 + [[ -n '' ]] 00:02:31.877 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:32.136 + for M in /var/spdk/build-*-manifest.txt 00:02:32.136 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:32.136 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:32.136 + for M in /var/spdk/build-*-manifest.txt 00:02:32.136 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:32.136 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:32.136 ++ uname 00:02:32.136 + [[ Linux == \L\i\n\u\x ]] 00:02:32.136 + sudo dmesg -T 00:02:32.136 + sudo dmesg --clear 00:02:32.136 + dmesg_pid=5868 00:02:32.136 + sudo dmesg -Tw 00:02:32.136 + [[ Fedora Linux == FreeBSD ]] 00:02:32.136 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:32.136 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:32.136 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:32.136 + [[ -x /usr/src/fio-static/fio ]] 00:02:32.136 + export FIO_BIN=/usr/src/fio-static/fio 
00:02:32.136 + FIO_BIN=/usr/src/fio-static/fio 00:02:32.136 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:32.136 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:32.136 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:32.136 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:32.136 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:32.136 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:32.136 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:32.136 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:32.136 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:32.136 Test configuration: 00:02:32.136 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:32.136 SPDK_TEST_NVMF=1 00:02:32.136 SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:32.136 SPDK_TEST_URING=1 00:02:32.136 SPDK_TEST_USDT=1 00:02:32.136 SPDK_RUN_UBSAN=1 00:02:32.136 NET_TYPE=virt 00:02:32.136 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:32.136 SPDK_RUN_EXTERNAL_DPDK=/home/vagrant/spdk_repo/dpdk/build 00:02:32.136 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:32.136 RUN_NIGHTLY=1 07:48:37 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:32.136 07:48:37 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:32.136 07:48:37 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:32.136 07:48:37 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:32.136 07:48:37 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.136 07:48:37 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.136 07:48:37 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.136 07:48:37 -- paths/export.sh@5 -- $ export PATH 00:02:32.136 07:48:37 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:32.136 07:48:37 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:32.136 07:48:37 -- common/autobuild_common.sh@435 -- $ date +%s 00:02:32.136 07:48:37 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720856917.XXXXXX 00:02:32.137 07:48:37 -- 
common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720856917.OPLFwQ 00:02:32.137 07:48:37 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:02:32.137 07:48:37 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:02:32.137 07:48:37 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:32.137 07:48:37 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:02:32.137 07:48:37 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:32.137 07:48:37 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:32.137 07:48:37 -- common/autobuild_common.sh@451 -- $ get_config_params 00:02:32.137 07:48:37 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:02:32.137 07:48:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.137 07:48:37 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:02:32.137 07:48:37 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:32.137 07:48:37 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:32.137 07:48:37 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:32.137 07:48:37 -- spdk/autobuild.sh@16 -- $ date -u 00:02:32.137 Sat Jul 13 07:48:37 AM UTC 2024 00:02:32.137 07:48:37 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:32.137 LTS-59-g4b94202c6 00:02:32.137 07:48:37 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:32.137 07:48:37 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:32.137 07:48:37 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:32.137 07:48:37 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:02:32.137 07:48:37 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:32.137 07:48:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.137 ************************************ 00:02:32.137 START TEST ubsan 00:02:32.137 ************************************ 00:02:32.137 using ubsan 00:02:32.137 07:48:37 -- common/autotest_common.sh@1104 -- $ echo 'using ubsan' 00:02:32.137 00:02:32.137 real 0m0.000s 00:02:32.137 user 0m0.000s 00:02:32.137 sys 0m0.000s 00:02:32.137 07:48:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:02:32.137 07:48:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.137 ************************************ 00:02:32.137 END TEST ubsan 00:02:32.137 ************************************ 00:02:32.396 07:48:37 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:32.396 07:48:37 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:32.396 07:48:37 -- common/autobuild_common.sh@427 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:32.396 07:48:37 -- common/autotest_common.sh@1077 -- $ '[' 2 -le 1 ']' 00:02:32.396 07:48:37 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:02:32.396 07:48:37 -- common/autotest_common.sh@10 -- $ set +x 00:02:32.396 ************************************ 00:02:32.396 START TEST build_native_dpdk 00:02:32.396 ************************************ 00:02:32.396 07:48:37 -- common/autotest_common.sh@1104 -- $ _build_native_dpdk 00:02:32.396 
07:48:37 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:32.396 07:48:37 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:32.396 07:48:37 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:32.396 07:48:37 -- common/autobuild_common.sh@51 -- $ local compiler 00:02:32.396 07:48:37 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:32.396 07:48:37 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:32.396 07:48:37 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:32.396 07:48:37 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:32.396 07:48:37 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:32.396 07:48:37 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:32.396 07:48:37 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:32.396 07:48:37 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:32.396 07:48:38 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:32.396 07:48:38 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:32.396 07:48:38 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/home/vagrant/spdk_repo/dpdk/build 00:02:32.396 07:48:38 -- common/autobuild_common.sh@71 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:02:32.396 07:48:38 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/home/vagrant/spdk_repo/dpdk 00:02:32.396 07:48:38 -- common/autobuild_common.sh@73 -- $ [[ ! -d /home/vagrant/spdk_repo/dpdk ]] 00:02:32.396 07:48:38 -- common/autobuild_common.sh@82 -- $ orgdir=/home/vagrant/spdk_repo/spdk 00:02:32.396 07:48:38 -- common/autobuild_common.sh@83 -- $ git -C /home/vagrant/spdk_repo/dpdk log --oneline -n 5 00:02:32.396 caf0f5d395 version: 22.11.4 00:02:32.396 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:32.396 dc9c799c7d vhost: fix missing spinlock unlock 00:02:32.396 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:32.396 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:32.396 07:48:38 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:32.396 07:48:38 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:32.396 07:48:38 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:32.396 07:48:38 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:32.396 07:48:38 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:32.396 07:48:38 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:32.396 07:48:38 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:32.396 07:48:38 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:32.396 07:48:38 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:32.396 07:48:38 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:02:32.396 07:48:38 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:02:32.396 07:48:38 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:32.396 07:48:38 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:02:32.396 07:48:38 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:02:32.396 07:48:38 -- common/autobuild_common.sh@167 -- $ cd /home/vagrant/spdk_repo/dpdk 00:02:32.396 07:48:38 -- common/autobuild_common.sh@168 -- $ uname -s 00:02:32.396 07:48:38 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:02:32.396 07:48:38 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 
00:02:32.396 07:48:38 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:32.396 07:48:38 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:02:32.396 07:48:38 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:02:32.396 07:48:38 -- scripts/common.sh@335 -- $ IFS=.-: 00:02:32.396 07:48:38 -- scripts/common.sh@335 -- $ read -ra ver1 00:02:32.396 07:48:38 -- scripts/common.sh@336 -- $ IFS=.-: 00:02:32.396 07:48:38 -- scripts/common.sh@336 -- $ read -ra ver2 00:02:32.396 07:48:38 -- scripts/common.sh@337 -- $ local 'op=<' 00:02:32.396 07:48:38 -- scripts/common.sh@339 -- $ ver1_l=3 00:02:32.396 07:48:38 -- scripts/common.sh@340 -- $ ver2_l=3 00:02:32.396 07:48:38 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:02:32.396 07:48:38 -- scripts/common.sh@343 -- $ case "$op" in 00:02:32.396 07:48:38 -- scripts/common.sh@344 -- $ : 1 00:02:32.396 07:48:38 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:02:32.396 07:48:38 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:32.396 07:48:38 -- scripts/common.sh@364 -- $ decimal 22 00:02:32.397 07:48:38 -- scripts/common.sh@352 -- $ local d=22 00:02:32.397 07:48:38 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:32.397 07:48:38 -- scripts/common.sh@354 -- $ echo 22 00:02:32.397 07:48:38 -- scripts/common.sh@364 -- $ ver1[v]=22 00:02:32.397 07:48:38 -- scripts/common.sh@365 -- $ decimal 21 00:02:32.397 07:48:38 -- scripts/common.sh@352 -- $ local d=21 00:02:32.397 07:48:38 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:32.397 07:48:38 -- scripts/common.sh@354 -- $ echo 21 00:02:32.397 07:48:38 -- scripts/common.sh@365 -- $ ver2[v]=21 00:02:32.397 07:48:38 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:02:32.397 07:48:38 -- scripts/common.sh@366 -- $ return 1 00:02:32.397 07:48:38 -- common/autobuild_common.sh@173 -- $ patch -p1 00:02:32.397 patching file config/rte_config.h 00:02:32.397 Hunk #1 succeeded at 60 (offset 1 line). 
00:02:32.397 07:48:38 -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false 00:02:32.397 07:48:38 -- common/autobuild_common.sh@178 -- $ uname -s 00:02:32.397 07:48:38 -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']' 00:02:32.397 07:48:38 -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:02:32.397 07:48:38 -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/home/vagrant/spdk_repo/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:37.666 The Meson build system 00:02:37.666 Version: 1.3.1 00:02:37.666 Source dir: /home/vagrant/spdk_repo/dpdk 00:02:37.666 Build dir: /home/vagrant/spdk_repo/dpdk/build-tmp 00:02:37.666 Build type: native build 00:02:37.666 Program cat found: YES (/usr/bin/cat) 00:02:37.666 Project name: DPDK 00:02:37.666 Project version: 22.11.4 00:02:37.666 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:37.666 C linker for the host machine: gcc ld.bfd 2.39-16 00:02:37.666 Host machine cpu family: x86_64 00:02:37.666 Host machine cpu: x86_64 00:02:37.666 Message: ## Building in Developer Mode ## 00:02:37.666 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:37.666 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/check-symbols.sh) 00:02:37.666 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/dpdk/buildtools/options-ibverbs-static.sh) 00:02:37.666 Program objdump found: YES (/usr/bin/objdump) 00:02:37.666 Program python3 found: YES (/usr/bin/python3) 00:02:37.666 Program cat found: YES (/usr/bin/cat) 00:02:37.666 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:37.666 Checking for size of "void *" : 8 00:02:37.666 Checking for size of "void *" : 8 (cached) 00:02:37.666 Library m found: YES 00:02:37.666 Library numa found: YES 00:02:37.666 Has header "numaif.h" : YES 00:02:37.666 Library fdt found: NO 00:02:37.666 Library execinfo found: NO 00:02:37.666 Has header "execinfo.h" : YES 00:02:37.666 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:37.666 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:37.666 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:37.666 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:37.666 Run-time dependency openssl found: YES 3.0.9 00:02:37.666 Run-time dependency libpcap found: YES 1.10.4 00:02:37.666 Has header "pcap.h" with dependency libpcap: YES 00:02:37.666 Compiler for C supports arguments -Wcast-qual: YES 00:02:37.666 Compiler for C supports arguments -Wdeprecated: YES 00:02:37.666 Compiler for C supports arguments -Wformat: YES 00:02:37.666 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:37.666 Compiler for C supports arguments -Wformat-security: NO 00:02:37.666 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:37.666 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:37.666 Compiler for C supports arguments -Wnested-externs: YES 00:02:37.666 Compiler for C supports arguments -Wold-style-definition: YES 00:02:37.666 Compiler for C supports arguments -Wpointer-arith: YES 00:02:37.666 Compiler for C supports arguments -Wsign-compare: YES 00:02:37.666 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:37.666 Compiler for C supports arguments -Wundef: YES 00:02:37.666 Compiler for C supports arguments -Wwrite-strings: YES 00:02:37.666 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:37.666 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:37.666 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:37.666 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:37.666 Compiler for C supports arguments -mavx512f: YES 00:02:37.666 Checking if "AVX512 checking" compiles: YES 00:02:37.666 Fetching value of define "__SSE4_2__" : 1 00:02:37.666 Fetching value of define "__AES__" : 1 00:02:37.666 Fetching value of define "__AVX__" : 1 00:02:37.666 Fetching value of define "__AVX2__" : 1 00:02:37.666 Fetching value of define "__AVX512BW__" : (undefined) 00:02:37.666 Fetching value of define "__AVX512CD__" : (undefined) 00:02:37.666 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:37.666 Fetching value of define "__AVX512F__" : (undefined) 00:02:37.666 Fetching value of define "__AVX512VL__" : (undefined) 00:02:37.666 Fetching value of define "__PCLMUL__" : 1 00:02:37.666 Fetching value of define "__RDRND__" : 1 00:02:37.666 Fetching value of define "__RDSEED__" : 1 00:02:37.666 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:37.666 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:37.666 Message: lib/kvargs: Defining dependency "kvargs" 00:02:37.666 Message: lib/telemetry: Defining dependency "telemetry" 00:02:37.666 Checking for function "getentropy" : YES 00:02:37.666 Message: lib/eal: Defining dependency "eal" 00:02:37.666 Message: lib/ring: Defining dependency "ring" 00:02:37.666 Message: lib/rcu: Defining dependency "rcu" 00:02:37.666 Message: lib/mempool: Defining dependency "mempool" 00:02:37.666 Message: lib/mbuf: Defining dependency "mbuf" 00:02:37.666 Fetching value of define 
"__PCLMUL__" : 1 (cached) 00:02:37.666 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:37.666 Compiler for C supports arguments -mpclmul: YES 00:02:37.666 Compiler for C supports arguments -maes: YES 00:02:37.666 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:37.666 Compiler for C supports arguments -mavx512bw: YES 00:02:37.667 Compiler for C supports arguments -mavx512dq: YES 00:02:37.667 Compiler for C supports arguments -mavx512vl: YES 00:02:37.667 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:37.667 Compiler for C supports arguments -mavx2: YES 00:02:37.667 Compiler for C supports arguments -mavx: YES 00:02:37.667 Message: lib/net: Defining dependency "net" 00:02:37.667 Message: lib/meter: Defining dependency "meter" 00:02:37.667 Message: lib/ethdev: Defining dependency "ethdev" 00:02:37.667 Message: lib/pci: Defining dependency "pci" 00:02:37.667 Message: lib/cmdline: Defining dependency "cmdline" 00:02:37.667 Message: lib/metrics: Defining dependency "metrics" 00:02:37.667 Message: lib/hash: Defining dependency "hash" 00:02:37.667 Message: lib/timer: Defining dependency "timer" 00:02:37.667 Fetching value of define "__AVX2__" : 1 (cached) 00:02:37.667 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:37.667 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:02:37.667 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:02:37.667 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:02:37.667 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:02:37.667 Message: lib/acl: Defining dependency "acl" 00:02:37.667 Message: lib/bbdev: Defining dependency "bbdev" 00:02:37.667 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:37.667 Run-time dependency libelf found: YES 0.190 00:02:37.667 Message: lib/bpf: Defining dependency "bpf" 00:02:37.667 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:37.667 Message: lib/compressdev: Defining dependency "compressdev" 00:02:37.667 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:37.667 Message: lib/distributor: Defining dependency "distributor" 00:02:37.667 Message: lib/efd: Defining dependency "efd" 00:02:37.667 Message: lib/eventdev: Defining dependency "eventdev" 00:02:37.667 Message: lib/gpudev: Defining dependency "gpudev" 00:02:37.667 Message: lib/gro: Defining dependency "gro" 00:02:37.667 Message: lib/gso: Defining dependency "gso" 00:02:37.667 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:37.667 Message: lib/jobstats: Defining dependency "jobstats" 00:02:37.667 Message: lib/latencystats: Defining dependency "latencystats" 00:02:37.667 Message: lib/lpm: Defining dependency "lpm" 00:02:37.667 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:37.667 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:37.667 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:37.667 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:37.667 Message: lib/member: Defining dependency "member" 00:02:37.667 Message: lib/pcapng: Defining dependency "pcapng" 00:02:37.667 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:37.667 Message: lib/power: Defining dependency "power" 00:02:37.667 Message: lib/rawdev: Defining dependency "rawdev" 00:02:37.667 Message: lib/regexdev: Defining dependency "regexdev" 00:02:37.667 Message: lib/dmadev: Defining dependency "dmadev" 00:02:37.667 Message: lib/rib: Defining 
dependency "rib" 00:02:37.667 Message: lib/reorder: Defining dependency "reorder" 00:02:37.667 Message: lib/sched: Defining dependency "sched" 00:02:37.667 Message: lib/security: Defining dependency "security" 00:02:37.667 Message: lib/stack: Defining dependency "stack" 00:02:37.667 Has header "linux/userfaultfd.h" : YES 00:02:37.667 Message: lib/vhost: Defining dependency "vhost" 00:02:37.667 Message: lib/ipsec: Defining dependency "ipsec" 00:02:37.667 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:37.667 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:02:37.667 Compiler for C supports arguments -mavx512f -mavx512dq: YES 00:02:37.667 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:37.667 Message: lib/fib: Defining dependency "fib" 00:02:37.667 Message: lib/port: Defining dependency "port" 00:02:37.667 Message: lib/pdump: Defining dependency "pdump" 00:02:37.667 Message: lib/table: Defining dependency "table" 00:02:37.667 Message: lib/pipeline: Defining dependency "pipeline" 00:02:37.667 Message: lib/graph: Defining dependency "graph" 00:02:37.667 Message: lib/node: Defining dependency "node" 00:02:37.667 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:37.667 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:37.667 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:37.667 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:37.667 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:37.667 Compiler for C supports arguments -Wno-unused-value: YES 00:02:37.667 Compiler for C supports arguments -Wno-format: YES 00:02:37.667 Compiler for C supports arguments -Wno-format-security: YES 00:02:37.667 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:39.042 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:39.042 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:39.042 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:39.042 Fetching value of define "__AVX2__" : 1 (cached) 00:02:39.042 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:39.042 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:39.042 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:39.042 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:39.042 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:39.042 Program doxygen found: YES (/usr/bin/doxygen) 00:02:39.042 Configuring doxy-api.conf using configuration 00:02:39.042 Program sphinx-build found: NO 00:02:39.042 Configuring rte_build_config.h using configuration 00:02:39.042 Message: 00:02:39.042 ================= 00:02:39.042 Applications Enabled 00:02:39.042 ================= 00:02:39.042 00:02:39.042 apps: 00:02:39.042 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:39.042 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:39.042 test-security-perf, 00:02:39.042 00:02:39.042 Message: 00:02:39.042 ================= 00:02:39.042 Libraries Enabled 00:02:39.042 ================= 00:02:39.042 00:02:39.042 libs: 00:02:39.042 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:39.042 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:39.042 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:39.042 eventdev, gpudev, gro, gso, ip_frag, 
jobstats, latencystats, lpm, 00:02:39.042 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:39.042 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:39.042 table, pipeline, graph, node, 00:02:39.042 00:02:39.042 Message: 00:02:39.042 =============== 00:02:39.042 Drivers Enabled 00:02:39.042 =============== 00:02:39.042 00:02:39.042 common: 00:02:39.042 00:02:39.042 bus: 00:02:39.042 pci, vdev, 00:02:39.042 mempool: 00:02:39.042 ring, 00:02:39.042 dma: 00:02:39.042 00:02:39.042 net: 00:02:39.042 i40e, 00:02:39.042 raw: 00:02:39.042 00:02:39.042 crypto: 00:02:39.042 00:02:39.042 compress: 00:02:39.042 00:02:39.042 regex: 00:02:39.042 00:02:39.042 vdpa: 00:02:39.042 00:02:39.042 event: 00:02:39.042 00:02:39.042 baseband: 00:02:39.042 00:02:39.042 gpu: 00:02:39.042 00:02:39.042 00:02:39.042 Message: 00:02:39.042 ================= 00:02:39.042 Content Skipped 00:02:39.042 ================= 00:02:39.042 00:02:39.042 apps: 00:02:39.042 00:02:39.042 libs: 00:02:39.042 kni: explicitly disabled via build config (deprecated lib) 00:02:39.042 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:39.042 00:02:39.042 drivers: 00:02:39.042 common/cpt: not in enabled drivers build config 00:02:39.042 common/dpaax: not in enabled drivers build config 00:02:39.042 common/iavf: not in enabled drivers build config 00:02:39.042 common/idpf: not in enabled drivers build config 00:02:39.042 common/mvep: not in enabled drivers build config 00:02:39.042 common/octeontx: not in enabled drivers build config 00:02:39.042 bus/auxiliary: not in enabled drivers build config 00:02:39.042 bus/dpaa: not in enabled drivers build config 00:02:39.042 bus/fslmc: not in enabled drivers build config 00:02:39.042 bus/ifpga: not in enabled drivers build config 00:02:39.042 bus/vmbus: not in enabled drivers build config 00:02:39.042 common/cnxk: not in enabled drivers build config 00:02:39.042 common/mlx5: not in enabled drivers build config 00:02:39.042 common/qat: not in enabled drivers build config 00:02:39.042 common/sfc_efx: not in enabled drivers build config 00:02:39.042 mempool/bucket: not in enabled drivers build config 00:02:39.042 mempool/cnxk: not in enabled drivers build config 00:02:39.042 mempool/dpaa: not in enabled drivers build config 00:02:39.042 mempool/dpaa2: not in enabled drivers build config 00:02:39.042 mempool/octeontx: not in enabled drivers build config 00:02:39.042 mempool/stack: not in enabled drivers build config 00:02:39.042 dma/cnxk: not in enabled drivers build config 00:02:39.042 dma/dpaa: not in enabled drivers build config 00:02:39.042 dma/dpaa2: not in enabled drivers build config 00:02:39.042 dma/hisilicon: not in enabled drivers build config 00:02:39.042 dma/idxd: not in enabled drivers build config 00:02:39.042 dma/ioat: not in enabled drivers build config 00:02:39.042 dma/skeleton: not in enabled drivers build config 00:02:39.042 net/af_packet: not in enabled drivers build config 00:02:39.042 net/af_xdp: not in enabled drivers build config 00:02:39.042 net/ark: not in enabled drivers build config 00:02:39.042 net/atlantic: not in enabled drivers build config 00:02:39.042 net/avp: not in enabled drivers build config 00:02:39.042 net/axgbe: not in enabled drivers build config 00:02:39.042 net/bnx2x: not in enabled drivers build config 00:02:39.042 net/bnxt: not in enabled drivers build config 00:02:39.042 net/bonding: not in enabled drivers build config 00:02:39.042 net/cnxk: not in enabled drivers build config 00:02:39.042 net/cxgbe: not in 
enabled drivers build config 00:02:39.042 net/dpaa: not in enabled drivers build config 00:02:39.042 net/dpaa2: not in enabled drivers build config 00:02:39.042 net/e1000: not in enabled drivers build config 00:02:39.042 net/ena: not in enabled drivers build config 00:02:39.042 net/enetc: not in enabled drivers build config 00:02:39.042 net/enetfec: not in enabled drivers build config 00:02:39.042 net/enic: not in enabled drivers build config 00:02:39.042 net/failsafe: not in enabled drivers build config 00:02:39.042 net/fm10k: not in enabled drivers build config 00:02:39.042 net/gve: not in enabled drivers build config 00:02:39.042 net/hinic: not in enabled drivers build config 00:02:39.042 net/hns3: not in enabled drivers build config 00:02:39.042 net/iavf: not in enabled drivers build config 00:02:39.042 net/ice: not in enabled drivers build config 00:02:39.042 net/idpf: not in enabled drivers build config 00:02:39.042 net/igc: not in enabled drivers build config 00:02:39.042 net/ionic: not in enabled drivers build config 00:02:39.042 net/ipn3ke: not in enabled drivers build config 00:02:39.042 net/ixgbe: not in enabled drivers build config 00:02:39.042 net/kni: not in enabled drivers build config 00:02:39.042 net/liquidio: not in enabled drivers build config 00:02:39.042 net/mana: not in enabled drivers build config 00:02:39.042 net/memif: not in enabled drivers build config 00:02:39.042 net/mlx4: not in enabled drivers build config 00:02:39.042 net/mlx5: not in enabled drivers build config 00:02:39.042 net/mvneta: not in enabled drivers build config 00:02:39.042 net/mvpp2: not in enabled drivers build config 00:02:39.042 net/netvsc: not in enabled drivers build config 00:02:39.042 net/nfb: not in enabled drivers build config 00:02:39.042 net/nfp: not in enabled drivers build config 00:02:39.042 net/ngbe: not in enabled drivers build config 00:02:39.042 net/null: not in enabled drivers build config 00:02:39.042 net/octeontx: not in enabled drivers build config 00:02:39.042 net/octeon_ep: not in enabled drivers build config 00:02:39.042 net/pcap: not in enabled drivers build config 00:02:39.042 net/pfe: not in enabled drivers build config 00:02:39.042 net/qede: not in enabled drivers build config 00:02:39.042 net/ring: not in enabled drivers build config 00:02:39.042 net/sfc: not in enabled drivers build config 00:02:39.042 net/softnic: not in enabled drivers build config 00:02:39.042 net/tap: not in enabled drivers build config 00:02:39.042 net/thunderx: not in enabled drivers build config 00:02:39.042 net/txgbe: not in enabled drivers build config 00:02:39.042 net/vdev_netvsc: not in enabled drivers build config 00:02:39.042 net/vhost: not in enabled drivers build config 00:02:39.042 net/virtio: not in enabled drivers build config 00:02:39.042 net/vmxnet3: not in enabled drivers build config 00:02:39.042 raw/cnxk_bphy: not in enabled drivers build config 00:02:39.042 raw/cnxk_gpio: not in enabled drivers build config 00:02:39.042 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:39.042 raw/ifpga: not in enabled drivers build config 00:02:39.042 raw/ntb: not in enabled drivers build config 00:02:39.042 raw/skeleton: not in enabled drivers build config 00:02:39.042 crypto/armv8: not in enabled drivers build config 00:02:39.042 crypto/bcmfs: not in enabled drivers build config 00:02:39.042 crypto/caam_jr: not in enabled drivers build config 00:02:39.042 crypto/ccp: not in enabled drivers build config 00:02:39.042 crypto/cnxk: not in enabled drivers build config 00:02:39.042 
crypto/dpaa_sec: not in enabled drivers build config 00:02:39.042 crypto/dpaa2_sec: not in enabled drivers build config 00:02:39.042 crypto/ipsec_mb: not in enabled drivers build config 00:02:39.042 crypto/mlx5: not in enabled drivers build config 00:02:39.042 crypto/mvsam: not in enabled drivers build config 00:02:39.042 crypto/nitrox: not in enabled drivers build config 00:02:39.042 crypto/null: not in enabled drivers build config 00:02:39.042 crypto/octeontx: not in enabled drivers build config 00:02:39.042 crypto/openssl: not in enabled drivers build config 00:02:39.042 crypto/scheduler: not in enabled drivers build config 00:02:39.042 crypto/uadk: not in enabled drivers build config 00:02:39.043 crypto/virtio: not in enabled drivers build config 00:02:39.043 compress/isal: not in enabled drivers build config 00:02:39.043 compress/mlx5: not in enabled drivers build config 00:02:39.043 compress/octeontx: not in enabled drivers build config 00:02:39.043 compress/zlib: not in enabled drivers build config 00:02:39.043 regex/mlx5: not in enabled drivers build config 00:02:39.043 regex/cn9k: not in enabled drivers build config 00:02:39.043 vdpa/ifc: not in enabled drivers build config 00:02:39.043 vdpa/mlx5: not in enabled drivers build config 00:02:39.043 vdpa/sfc: not in enabled drivers build config 00:02:39.043 event/cnxk: not in enabled drivers build config 00:02:39.043 event/dlb2: not in enabled drivers build config 00:02:39.043 event/dpaa: not in enabled drivers build config 00:02:39.043 event/dpaa2: not in enabled drivers build config 00:02:39.043 event/dsw: not in enabled drivers build config 00:02:39.043 event/opdl: not in enabled drivers build config 00:02:39.043 event/skeleton: not in enabled drivers build config 00:02:39.043 event/sw: not in enabled drivers build config 00:02:39.043 event/octeontx: not in enabled drivers build config 00:02:39.043 baseband/acc: not in enabled drivers build config 00:02:39.043 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:39.043 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:39.043 baseband/la12xx: not in enabled drivers build config 00:02:39.043 baseband/null: not in enabled drivers build config 00:02:39.043 baseband/turbo_sw: not in enabled drivers build config 00:02:39.043 gpu/cuda: not in enabled drivers build config
00:02:39.043
00:02:39.043
00:02:39.043 Build targets in project: 314
00:02:39.043
00:02:39.043 DPDK 22.11.4
00:02:39.043
00:02:39.043 User defined options
00:02:39.043 libdir : lib
00:02:39.043 prefix : /home/vagrant/spdk_repo/dpdk/build
00:02:39.043 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:02:39.043 c_link_args :
00:02:39.043 enable_docs : false
00:02:39.043 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:02:39.043 enable_kmods : false
00:02:39.043 machine : native
00:02:39.043 tests : false
00:02:39.043
00:02:39.043 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:39.043 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
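Note: the exact configure command line is not captured in this log; only the resulting option summary above is. As an illustrative sketch only, the summary corresponds roughly to a DPDK configure step of the following shape, written here in the non-deprecated `meson setup` form that the warning above recommends, run from the DPDK source tree, with the build directory taken from the `ninja -C .../build-tmp` invocation that follows (all option names are the standard DPDK/meson ones; anything not shown in the summary is assumed to be left at its default):

  meson setup build-tmp \
      --prefix=/home/vagrant/spdk_repo/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false
  ninja -C build-tmp -j10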
00:02:39.043 07:48:44 -- common/autobuild_common.sh@186 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 00:02:39.043 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:02:39.043 [1/743] Generating lib/rte_kvargs_mingw with a custom command 00:02:39.043 [2/743] Generating lib/rte_telemetry_def with a custom command 00:02:39.043 [3/743] Generating lib/rte_telemetry_mingw with a custom command 00:02:39.043 [4/743] Generating lib/rte_kvargs_def with a custom command 00:02:39.043 [5/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:39.043 [6/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:39.043 [7/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:39.043 [8/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:39.043 [9/743] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:39.043 [10/743] Linking static target lib/librte_kvargs.a 00:02:39.043 [11/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:39.043 [12/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:39.301 [13/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:39.301 [14/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:39.301 [15/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:39.301 [16/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:39.301 [17/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:39.301 [18/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:39.301 [19/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:39.301 [20/743] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.301 [21/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:39.301 [22/743] Linking target lib/librte_kvargs.so.23.0 00:02:39.301 [23/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:39.559 [24/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:39.559 [25/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:39.559 [26/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:39.559 [27/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:39.559 [28/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:39.559 [29/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:39.559 [30/743] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:39.559 [31/743] Linking static target lib/librte_telemetry.a 00:02:39.559 [32/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:39.559 [33/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:39.559 [34/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:39.817 [35/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:39.817 [36/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:39.817 [37/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:39.817 [38/743] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:39.817 [39/743] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:39.817 [40/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:39.817 [41/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:39.817 [42/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:40.076 [43/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:40.076 [44/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:40.076 [45/743] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.076 [46/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:40.076 [47/743] Linking target lib/librte_telemetry.so.23.0 00:02:40.076 [48/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:40.076 [49/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:40.076 [50/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:40.076 [51/743] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:40.334 [52/743] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:40.334 [53/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:40.334 [54/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:40.334 [55/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:40.334 [56/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:40.334 [57/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:40.334 [58/743] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:40.334 [59/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:40.334 [60/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:40.334 [61/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:40.334 [62/743] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:40.334 [63/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:40.334 [64/743] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:40.334 [65/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:40.334 [66/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:40.593 [67/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:40.593 [68/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:40.593 [69/743] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:40.593 [70/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:40.593 [71/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:40.593 [72/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:40.593 [73/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:40.593 [74/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:40.593 [75/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:40.593 [76/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:40.593 [77/743] Generating lib/rte_eal_def with a custom command 00:02:40.593 [78/743] Generating lib/rte_eal_mingw with a 
custom command 00:02:40.593 [79/743] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:40.593 [80/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:40.593 [81/743] Generating lib/rte_ring_def with a custom command 00:02:40.593 [82/743] Generating lib/rte_ring_mingw with a custom command 00:02:40.593 [83/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:40.593 [84/743] Generating lib/rte_rcu_def with a custom command 00:02:40.593 [85/743] Generating lib/rte_rcu_mingw with a custom command 00:02:40.874 [86/743] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:40.874 [87/743] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:40.874 [88/743] Linking static target lib/librte_ring.a 00:02:40.874 [89/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:40.874 [90/743] Generating lib/rte_mempool_def with a custom command 00:02:40.874 [91/743] Generating lib/rte_mempool_mingw with a custom command 00:02:40.874 [92/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:40.874 [93/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:41.131 [94/743] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.131 [95/743] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:41.131 [96/743] Linking static target lib/librte_eal.a 00:02:41.389 [97/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:41.389 [98/743] Generating lib/rte_mbuf_def with a custom command 00:02:41.389 [99/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:41.389 [100/743] Generating lib/rte_mbuf_mingw with a custom command 00:02:41.647 [101/743] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:41.647 [102/743] Linking static target lib/librte_rcu.a 00:02:41.647 [103/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:41.647 [104/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:41.647 [105/743] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:41.647 [106/743] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:41.647 [107/743] Linking static target lib/librte_mempool.a 00:02:41.905 [108/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:41.905 [109/743] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:41.905 [110/743] Generating lib/rte_net_def with a custom command 00:02:41.905 [111/743] Generating lib/rte_net_mingw with a custom command 00:02:41.905 [112/743] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:41.905 [113/743] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:41.905 [114/743] Generating lib/rte_meter_def with a custom command 00:02:41.905 [115/743] Generating lib/rte_meter_mingw with a custom command 00:02:42.164 [116/743] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:42.164 [117/743] Linking static target lib/librte_meter.a 00:02:42.164 [118/743] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:42.164 [119/743] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:42.164 [120/743] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:42.422 [121/743] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.422 [122/743] 
Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:42.422 [123/743] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:42.422 [124/743] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:42.422 [125/743] Linking static target lib/librte_mbuf.a 00:02:42.422 [126/743] Linking static target lib/librte_net.a 00:02:42.679 [127/743] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.679 [128/743] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.937 [129/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:42.937 [130/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:42.937 [131/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:42.937 [132/743] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:42.937 [133/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:42.937 [134/743] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.195 [135/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:43.452 [136/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:43.710 [137/743] Generating lib/rte_ethdev_def with a custom command 00:02:43.710 [138/743] Generating lib/rte_ethdev_mingw with a custom command 00:02:43.710 [139/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:43.710 [140/743] Generating lib/rte_pci_def with a custom command 00:02:43.710 [141/743] Generating lib/rte_pci_mingw with a custom command 00:02:43.710 [142/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:43.710 [143/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:43.710 [144/743] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:43.710 [145/743] Linking static target lib/librte_pci.a 00:02:43.710 [146/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:43.710 [147/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:43.710 [148/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:43.968 [149/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:43.968 [150/743] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.968 [151/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:43.968 [152/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:43.968 [153/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:43.968 [154/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:43.968 [155/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:43.968 [156/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:43.968 [157/743] Generating lib/rte_cmdline_def with a custom command 00:02:43.968 [158/743] Generating lib/rte_cmdline_mingw with a custom command 00:02:43.968 [159/743] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:43.968 [160/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:44.226 [161/743] Generating lib/rte_metrics_def with a custom command 00:02:44.226 [162/743] Generating lib/rte_metrics_mingw with a custom command 00:02:44.226 [163/743] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:44.226 [164/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:44.226 [165/743] Generating lib/rte_hash_def with a custom command 00:02:44.226 [166/743] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:44.226 [167/743] Generating lib/rte_hash_mingw with a custom command 00:02:44.226 [168/743] Generating lib/rte_timer_def with a custom command 00:02:44.226 [169/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:44.227 [170/743] Generating lib/rte_timer_mingw with a custom command 00:02:44.484 [171/743] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:44.484 [172/743] Linking static target lib/librte_cmdline.a 00:02:44.484 [173/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:44.742 [174/743] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:44.742 [175/743] Linking static target lib/librte_metrics.a 00:02:44.742 [176/743] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:44.742 [177/743] Linking static target lib/librte_timer.a 00:02:45.000 [178/743] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.000 [179/743] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.258 [180/743] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:45.258 [181/743] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:45.258 [182/743] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:45.258 [183/743] Linking static target lib/librte_ethdev.a 00:02:45.258 [184/743] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:45.824 [185/743] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:45.824 [186/743] Generating lib/rte_acl_def with a custom command 00:02:45.824 [187/743] Generating lib/rte_acl_mingw with a custom command 00:02:45.824 [188/743] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:45.824 [189/743] Generating lib/rte_bbdev_def with a custom command 00:02:45.824 [190/743] Generating lib/rte_bbdev_mingw with a custom command 00:02:46.082 [191/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:46.082 [192/743] Generating lib/rte_bitratestats_def with a custom command 00:02:46.082 [193/743] Generating lib/rte_bitratestats_mingw with a custom command 00:02:46.340 [194/743] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:46.598 [195/743] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:46.598 [196/743] Linking static target lib/librte_bitratestats.a 00:02:46.598 [197/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:46.856 [198/743] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:46.856 [199/743] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:46.856 [200/743] Linking static target lib/librte_bbdev.a 00:02:46.856 [201/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:47.114 [202/743] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:47.114 [203/743] Linking static target lib/librte_hash.a 00:02:47.370 [204/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:47.370 [205/743] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.370 [206/743] 
Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:47.371 [207/743] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:02:47.371 [208/743] Linking static target lib/acl/libavx512_tmp.a 00:02:47.627 [209/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:47.883 [210/743] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:47.883 [211/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:47.883 [212/743] Generating lib/rte_bpf_def with a custom command 00:02:47.883 [213/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:47.883 [214/743] Generating lib/rte_bpf_mingw with a custom command 00:02:47.883 [215/743] Generating lib/rte_cfgfile_def with a custom command 00:02:47.883 [216/743] Generating lib/rte_cfgfile_mingw with a custom command 00:02:48.139 [217/743] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:48.139 [218/743] Linking static target lib/librte_acl.a 00:02:48.139 [219/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:48.139 [220/743] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:48.396 [221/743] Linking static target lib/librte_cfgfile.a 00:02:48.396 [222/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:48.396 [223/743] Generating lib/rte_compressdev_def with a custom command 00:02:48.396 [224/743] Generating lib/rte_compressdev_mingw with a custom command 00:02:48.396 [225/743] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.652 [226/743] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.652 [227/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:48.652 [228/743] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.652 [229/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:48.652 [230/743] Generating lib/rte_cryptodev_def with a custom command 00:02:48.652 [231/743] Linking target lib/librte_eal.so.23.0 00:02:48.652 [232/743] Generating lib/rte_cryptodev_mingw with a custom command 00:02:48.914 [233/743] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:48.914 [234/743] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:48.914 [235/743] Linking static target lib/librte_bpf.a 00:02:48.914 [236/743] Linking target lib/librte_ring.so.23.0 00:02:48.914 [237/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:48.914 [238/743] Linking target lib/librte_meter.so.23.0 00:02:48.914 [239/743] Linking target lib/librte_pci.so.23.0 00:02:48.914 [240/743] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:48.914 [241/743] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:48.914 [242/743] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:48.914 [243/743] Linking target lib/librte_rcu.so.23.0 00:02:48.914 [244/743] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:48.914 [245/743] Linking target lib/librte_mempool.so.23.0 00:02:48.914 [246/743] Linking target lib/librte_timer.so.23.0 00:02:48.914 [247/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:49.171 [248/743] Linking target lib/librte_acl.so.23.0 00:02:49.171 [249/743] Generating symbol file 
lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:49.171 [250/743] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:49.171 [251/743] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:49.171 [252/743] Linking static target lib/librte_compressdev.a 00:02:49.171 [253/743] Linking target lib/librte_cfgfile.so.23.0 00:02:49.171 [254/743] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:49.171 [255/743] Linking target lib/librte_mbuf.so.23.0 00:02:49.171 [256/743] Generating lib/rte_distributor_def with a custom command 00:02:49.171 [257/743] Generating lib/rte_distributor_mingw with a custom command 00:02:49.427 [258/743] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:49.427 [259/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:49.427 [260/743] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.427 [261/743] Linking target lib/librte_net.so.23.0 00:02:49.427 [262/743] Linking target lib/librte_bbdev.so.23.0 00:02:49.427 [263/743] Generating lib/rte_efd_def with a custom command 00:02:49.427 [264/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:49.427 [265/743] Generating lib/rte_efd_mingw with a custom command 00:02:49.428 [266/743] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:49.428 [267/743] Linking target lib/librte_cmdline.so.23.0 00:02:49.684 [268/743] Linking target lib/librte_hash.so.23.0 00:02:49.684 [269/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:49.684 [270/743] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:49.684 [271/743] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:49.684 [272/743] Linking static target lib/librte_distributor.a 00:02:49.941 [273/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:49.941 [274/743] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.941 [275/743] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:49.941 [276/743] Linking target lib/librte_distributor.so.23.0 00:02:50.198 [277/743] Linking target lib/librte_compressdev.so.23.0 00:02:50.198 [278/743] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.198 [279/743] Linking target lib/librte_ethdev.so.23.0 00:02:50.198 [280/743] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:50.198 [281/743] Generating lib/rte_eventdev_def with a custom command 00:02:50.198 [282/743] Generating lib/rte_eventdev_mingw with a custom command 00:02:50.198 [283/743] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:50.198 [284/743] Linking target lib/librte_metrics.so.23.0 00:02:50.455 [285/743] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:50.456 [286/743] Linking target lib/librte_bitratestats.so.23.0 00:02:50.456 [287/743] Linking target lib/librte_bpf.so.23.0 00:02:50.713 [288/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:50.713 [289/743] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:50.713 [290/743] Generating 
lib/rte_gpudev_def with a custom command 00:02:50.713 [291/743] Generating lib/rte_gpudev_mingw with a custom command 00:02:50.713 [292/743] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:50.971 [293/743] Linking static target lib/librte_efd.a 00:02:50.971 [294/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:50.971 [295/743] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.971 [296/743] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:50.971 [297/743] Linking static target lib/librte_cryptodev.a 00:02:51.228 [298/743] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:51.228 [299/743] Linking static target lib/librte_gpudev.a 00:02:51.228 [300/743] Linking target lib/librte_efd.so.23.0 00:02:51.229 [301/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:51.486 [302/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:51.486 [303/743] Generating lib/rte_gro_def with a custom command 00:02:51.486 [304/743] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:51.487 [305/743] Generating lib/rte_gro_mingw with a custom command 00:02:51.487 [306/743] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:51.487 [307/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:51.744 [308/743] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:52.002 [309/743] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.002 [310/743] Linking target lib/librte_gpudev.so.23.0 00:02:52.002 [311/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:52.002 [312/743] Generating lib/rte_gso_def with a custom command 00:02:52.002 [313/743] Generating lib/rte_gso_mingw with a custom command 00:02:52.002 [314/743] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:52.002 [315/743] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:52.002 [316/743] Linking static target lib/librte_gro.a 00:02:52.002 [317/743] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:52.260 [318/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:52.260 [319/743] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.260 [320/743] Linking target lib/librte_gro.so.23.0 00:02:52.260 [321/743] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:52.518 [322/743] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:52.519 [323/743] Generating lib/rte_ip_frag_def with a custom command 00:02:52.519 [324/743] Generating lib/rte_ip_frag_mingw with a custom command 00:02:52.519 [325/743] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:52.519 [326/743] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:52.519 [327/743] Linking static target lib/librte_jobstats.a 00:02:52.519 [328/743] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:52.519 [329/743] Linking static target lib/librte_eventdev.a 00:02:52.777 [330/743] Linking static target lib/librte_gso.a 00:02:52.777 [331/743] Generating lib/rte_jobstats_def with a custom command 00:02:52.777 [332/743] Generating lib/rte_jobstats_mingw with a custom command 00:02:52.777 [333/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:52.777 [334/743] 
Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:52.777 [335/743] Linking target lib/librte_gso.so.23.0 00:02:52.777 [336/743] Generating lib/rte_latencystats_def with a custom command 00:02:53.041 [337/743] Generating lib/rte_latencystats_mingw with a custom command 00:02:53.042 [338/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:53.042 [339/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:53.042 [340/743] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.042 [341/743] Generating lib/rte_lpm_def with a custom command 00:02:53.042 [342/743] Linking target lib/librte_jobstats.so.23.0 00:02:53.042 [343/743] Generating lib/rte_lpm_mingw with a custom command 00:02:53.042 [344/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:53.312 [345/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:53.312 [346/743] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:53.312 [347/743] Linking static target lib/librte_ip_frag.a 00:02:53.312 [348/743] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.312 [349/743] Linking target lib/librte_cryptodev.so.23.0 00:02:53.570 [350/743] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:53.570 [351/743] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.570 [352/743] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:53.570 [353/743] Linking target lib/librte_ip_frag.so.23.0 00:02:53.570 [354/743] Linking static target lib/librte_latencystats.a 00:02:53.828 [355/743] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:53.828 [356/743] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:53.828 [357/743] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:53.828 [358/743] Generating lib/rte_member_def with a custom command 00:02:53.828 [359/743] Generating lib/rte_member_mingw with a custom command 00:02:53.828 [360/743] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:53.828 [361/743] Generating lib/rte_pcapng_def with a custom command 00:02:53.828 [362/743] Generating lib/rte_pcapng_mingw with a custom command 00:02:53.828 [363/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:53.828 [364/743] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:53.828 [365/743] Linking target lib/librte_latencystats.so.23.0 00:02:53.828 [366/743] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:53.828 [367/743] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:54.086 [368/743] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:54.086 [369/743] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:54.086 [370/743] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:54.345 [371/743] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:54.345 [372/743] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:54.345 [373/743] Generating lib/rte_power_def with a custom command 00:02:54.345 [374/743] Generating lib/rte_power_mingw with a custom command 
00:02:54.345 [375/743] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:54.603 [376/743] Linking static target lib/librte_lpm.a 00:02:54.603 [377/743] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.603 [378/743] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:54.603 [379/743] Generating lib/rte_rawdev_def with a custom command 00:02:54.603 [380/743] Linking target lib/librte_eventdev.so.23.0 00:02:54.603 [381/743] Generating lib/rte_rawdev_mingw with a custom command 00:02:54.603 [382/743] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:54.603 [383/743] Generating lib/rte_regexdev_def with a custom command 00:02:54.861 [384/743] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:54.861 [385/743] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:54.861 [386/743] Generating lib/rte_regexdev_mingw with a custom command 00:02:54.861 [387/743] Linking static target lib/librte_pcapng.a 00:02:54.861 [388/743] Generating lib/rte_dmadev_def with a custom command 00:02:54.861 [389/743] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:54.861 [390/743] Generating lib/rte_dmadev_mingw with a custom command 00:02:54.861 [391/743] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:54.861 [392/743] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:54.861 [393/743] Linking static target lib/librte_rawdev.a 00:02:54.861 [394/743] Linking target lib/librte_lpm.so.23.0 00:02:54.861 [395/743] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:54.861 [396/743] Generating lib/rte_rib_def with a custom command 00:02:54.861 [397/743] Generating lib/rte_rib_mingw with a custom command 00:02:54.861 [398/743] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:55.120 [399/743] Generating lib/rte_reorder_def with a custom command 00:02:55.120 [400/743] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.120 [401/743] Generating lib/rte_reorder_mingw with a custom command 00:02:55.120 [402/743] Linking target lib/librte_pcapng.so.23.0 00:02:55.120 [403/743] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:55.120 [404/743] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:55.120 [405/743] Linking static target lib/librte_dmadev.a 00:02:55.120 [406/743] Linking static target lib/librte_power.a 00:02:55.120 [407/743] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:55.378 [408/743] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.378 [409/743] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:55.378 [410/743] Linking target lib/librte_rawdev.so.23.0 00:02:55.378 [411/743] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:55.378 [412/743] Linking static target lib/librte_regexdev.a 00:02:55.378 [413/743] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:55.378 [414/743] Generating lib/rte_sched_def with a custom command 00:02:55.378 [415/743] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:55.378 [416/743] Generating lib/rte_sched_mingw with a custom command 00:02:55.637 [417/743] Generating lib/rte_security_def with a custom command 00:02:55.637 [418/743] 
Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:55.637 [419/743] Generating lib/rte_security_mingw with a custom command 00:02:55.637 [420/743] Linking static target lib/librte_member.a 00:02:55.637 [421/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:55.637 [422/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:55.637 [423/743] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.637 [424/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:55.637 [425/743] Linking target lib/librte_dmadev.so.23.0 00:02:55.894 [426/743] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:55.894 [427/743] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:55.894 [428/743] Generating lib/rte_stack_def with a custom command 00:02:55.894 [429/743] Linking static target lib/librte_reorder.a 00:02:55.894 [430/743] Linking static target lib/librte_stack.a 00:02:55.894 [431/743] Generating lib/rte_stack_mingw with a custom command 00:02:55.894 [432/743] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.894 [433/743] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:55.894 [434/743] Linking target lib/librte_member.so.23.0 00:02:55.894 [435/743] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:55.894 [436/743] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:55.894 [437/743] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:56.152 [438/743] Linking static target lib/librte_rib.a 00:02:56.152 [439/743] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.152 [440/743] Linking target lib/librte_stack.so.23.0 00:02:56.152 [441/743] Linking target lib/librte_reorder.so.23.0 00:02:56.152 [442/743] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.153 [443/743] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.153 [444/743] Linking target lib/librte_power.so.23.0 00:02:56.153 [445/743] Linking target lib/librte_regexdev.so.23.0 00:02:56.411 [446/743] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:56.411 [447/743] Linking static target lib/librte_security.a 00:02:56.411 [448/743] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.411 [449/743] Linking target lib/librte_rib.so.23.0 00:02:56.669 [450/743] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:56.669 [451/743] Generating lib/rte_vhost_def with a custom command 00:02:56.669 [452/743] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:56.669 [453/743] Generating lib/rte_vhost_mingw with a custom command 00:02:56.669 [454/743] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:56.669 [455/743] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:56.927 [456/743] Linking target lib/librte_security.so.23.0 00:02:56.927 [457/743] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:56.927 [458/743] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:57.185 [459/743] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:57.185 [460/743] Linking static target lib/librte_sched.a 
00:02:57.442 [461/743] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.442 [462/743] Linking target lib/librte_sched.so.23.0 00:02:57.442 [463/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:57.442 [464/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:57.442 [465/743] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:57.701 [466/743] Generating lib/rte_ipsec_def with a custom command 00:02:57.701 [467/743] Generating lib/rte_ipsec_mingw with a custom command 00:02:57.701 [468/743] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:57.701 [469/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:57.701 [470/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:57.701 [471/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:58.266 [472/743] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:58.266 [473/743] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:02:58.266 [474/743] Generating lib/rte_fib_def with a custom command 00:02:58.266 [475/743] Linking static target lib/fib/libtrie_avx512_tmp.a 00:02:58.266 [476/743] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:02:58.266 [477/743] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:02:58.266 [478/743] Generating lib/rte_fib_mingw with a custom command 00:02:58.266 [479/743] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:58.524 [480/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:58.524 [481/743] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:58.524 [482/743] Linking static target lib/librte_ipsec.a 00:02:58.781 [483/743] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.039 [484/743] Linking target lib/librte_ipsec.so.23.0 00:02:59.039 [485/743] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:59.039 [486/743] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:59.039 [487/743] Linking static target lib/librte_fib.a 00:02:59.297 [488/743] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:59.297 [489/743] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:59.297 [490/743] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:59.297 [491/743] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:59.297 [492/743] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.554 [493/743] Linking target lib/librte_fib.so.23.0 00:02:59.554 [494/743] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:03:00.118 [495/743] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:03:00.118 [496/743] Generating lib/rte_port_def with a custom command 00:03:00.118 [497/743] Generating lib/rte_port_mingw with a custom command 00:03:00.118 [498/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:03:00.118 [499/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:03:00.118 [500/743] Generating lib/rte_pdump_def with a custom command 00:03:00.118 [501/743] Generating lib/rte_pdump_mingw with a custom command 00:03:00.376 [502/743] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:03:00.376 [503/743] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:03:00.376 [504/743] Compiling C object 
lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:03:00.633 [505/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:03:00.633 [506/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:03:00.633 [507/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:03:00.633 [508/743] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:03:00.633 [509/743] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:03:00.633 [510/743] Linking static target lib/librte_port.a 00:03:01.198 [511/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:03:01.198 [512/743] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:03:01.198 [513/743] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.198 [514/743] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:03:01.198 [515/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:03:01.198 [516/743] Linking target lib/librte_port.so.23.0 00:03:01.457 [517/743] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:03:01.457 [518/743] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:03:01.457 [519/743] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:03:01.457 [520/743] Linking static target lib/librte_pdump.a 00:03:01.715 [521/743] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.715 [522/743] Linking target lib/librte_pdump.so.23.0 00:03:01.973 [523/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:03:01.973 [524/743] Generating lib/rte_table_def with a custom command 00:03:01.973 [525/743] Generating lib/rte_table_mingw with a custom command 00:03:01.973 [526/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:03:01.973 [527/743] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:03:02.232 [528/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:03:02.232 [529/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:03:02.490 [530/743] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:03:02.490 [531/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:03:02.490 [532/743] Generating lib/rte_pipeline_def with a custom command 00:03:02.490 [533/743] Generating lib/rte_pipeline_mingw with a custom command 00:03:02.749 [534/743] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:03:02.749 [535/743] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:02.749 [536/743] Linking static target lib/librte_table.a 00:03:02.749 [537/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:03:03.007 [538/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:03:03.265 [539/743] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:03:03.265 [540/743] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:03:03.265 [541/743] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.522 [542/743] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:03:03.522 [543/743] Linking target lib/librte_table.so.23.0 00:03:03.522 [544/743] Generating lib/rte_graph_def with a custom command 00:03:03.522 [545/743] Generating lib/rte_graph_mingw with a custom 
command 00:03:03.522 [546/743] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:03:03.781 [547/743] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:03:03.781 [548/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:03:04.040 [549/743] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:03:04.040 [550/743] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:03:04.040 [551/743] Linking static target lib/librte_graph.a 00:03:04.040 [552/743] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:03:04.323 [553/743] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:03:04.323 [554/743] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:03:04.585 [555/743] Compiling C object lib/librte_node.a.p/node_null.c.o 00:03:04.844 [556/743] Compiling C object lib/librte_node.a.p/node_log.c.o 00:03:04.844 [557/743] Generating lib/rte_node_def with a custom command 00:03:04.844 [558/743] Generating lib/rte_node_mingw with a custom command 00:03:04.844 [559/743] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.844 [560/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:03:04.844 [561/743] Linking target lib/librte_graph.so.23.0 00:03:05.102 [562/743] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:03:05.102 [563/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:05.102 [564/743] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:03:05.102 [565/743] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:03:05.102 [566/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:05.102 [567/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:05.102 [568/743] Generating drivers/rte_bus_pci_def with a custom command 00:03:05.102 [569/743] Generating drivers/rte_bus_pci_mingw with a custom command 00:03:05.360 [570/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:05.360 [571/743] Generating drivers/rte_bus_vdev_def with a custom command 00:03:05.360 [572/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:05.360 [573/743] Generating drivers/rte_bus_vdev_mingw with a custom command 00:03:05.360 [574/743] Generating drivers/rte_mempool_ring_def with a custom command 00:03:05.360 [575/743] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:03:05.360 [576/743] Generating drivers/rte_mempool_ring_mingw with a custom command 00:03:05.360 [577/743] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:03:05.360 [578/743] Linking static target lib/librte_node.a 00:03:05.360 [579/743] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:05.360 [580/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:05.619 [581/743] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:05.619 [582/743] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.619 [583/743] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:05.619 [584/743] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.619 [585/743] Linking target lib/librte_node.so.23.0 00:03:05.619 [586/743] Linking static target drivers/librte_bus_vdev.a 00:03:05.619 [587/743] Compiling 
C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:05.878 [588/743] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:05.878 [589/743] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:05.878 [590/743] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.878 [591/743] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:05.878 [592/743] Linking target drivers/librte_bus_vdev.so.23.0 00:03:05.878 [593/743] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.136 [594/743] Linking static target drivers/librte_bus_pci.a 00:03:06.136 [595/743] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:06.136 [596/743] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:03:06.395 [597/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:03:06.395 [598/743] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:06.395 [599/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:03:06.395 [600/743] Linking target drivers/librte_bus_pci.so.23.0 00:03:06.395 [601/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:03:06.653 [602/743] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:03:06.653 [603/743] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:06.653 [604/743] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:06.912 [605/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:03:06.912 [606/743] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:06.912 [607/743] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:06.912 [608/743] Linking static target drivers/librte_mempool_ring.a 00:03:06.912 [609/743] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:06.912 [610/743] Linking target drivers/librte_mempool_ring.so.23.0 00:03:07.170 [611/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:03:07.736 [612/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:03:07.736 [613/743] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:03:07.736 [614/743] Linking static target drivers/net/i40e/base/libi40e_base.a 00:03:08.302 [615/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:03:08.302 [616/743] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:03:08.302 [617/743] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:03:08.868 [618/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:03:08.868 [619/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:03:08.868 [620/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:03:09.127 [621/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:03:09.127 [622/743] Generating drivers/rte_net_i40e_def with a custom command 00:03:09.127 [623/743] Generating drivers/rte_net_i40e_mingw with a custom command 00:03:09.127 [624/743] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:03:09.385 [625/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:03:10.320 [626/743] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:03:10.578 [627/743] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:03:10.578 [628/743] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:03:10.578 [629/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:03:10.578 [630/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:03:10.578 [631/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:03:10.837 [632/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:03:10.837 [633/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:03:10.837 [634/743] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:03:11.095 [635/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:03:11.095 [636/743] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:03:11.662 [637/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:03:11.662 [638/743] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:03:11.662 [639/743] Linking static target drivers/libtmp_rte_net_i40e.a 00:03:11.920 [640/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:03:11.920 [641/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:03:11.920 [642/743] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:03:11.920 [643/743] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:11.920 [644/743] Linking static target drivers/librte_net_i40e.a 00:03:12.178 [645/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:03:12.178 [646/743] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:03:12.178 [647/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:03:12.436 [648/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:03:12.437 [649/743] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:12.437 [650/743] Linking static target lib/librte_vhost.a 00:03:12.695 [651/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:03:12.695 [652/743] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.695 [653/743] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:03:12.695 [654/743] Linking target drivers/librte_net_i40e.so.23.0 00:03:12.953 [655/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:03:12.953 [656/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:03:13.211 [657/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:03:13.469 [658/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:03:13.727 [659/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:03:13.727 [660/743] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:03:13.727 [661/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:03:13.727 [662/743] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.727 [663/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:03:13.727 [664/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:03:13.727 [665/743] Linking target lib/librte_vhost.so.23.0 00:03:13.985 [666/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:03:13.985 [667/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:03:13.985 [668/743] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:03:14.244 [669/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:03:14.244 [670/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:03:14.502 [671/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:03:14.760 [672/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:03:14.760 [673/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:03:15.018 [674/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:03:15.276 [675/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:03:15.535 [676/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:03:15.535 [677/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:03:15.535 [678/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:03:15.793 [679/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:03:15.793 [680/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:03:16.051 [681/743] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:03:16.051 [682/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:03:16.337 [683/743] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:03:16.337 [684/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:03:16.337 [685/743] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:03:16.595 [686/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:03:16.595 [687/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:03:16.595 [688/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:03:16.853 [689/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:03:16.853 [690/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:03:16.853 [691/743] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:03:16.853 [692/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:03:17.111 [693/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:03:17.111 [694/743] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:03:17.369 [695/743] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:03:17.369 [696/743] Compiling C object 
app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:03:17.625 [697/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:03:17.882 [698/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:03:17.882 [699/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:03:18.445 [700/743] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:03:18.445 [701/743] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:03:18.445 [702/743] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:03:18.758 [703/743] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:03:18.758 [704/743] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:03:18.758 [705/743] Linking static target lib/librte_pipeline.a 00:03:18.758 [706/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:03:18.758 [707/743] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:03:19.321 [708/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:03:19.321 [709/743] Linking target app/dpdk-pdump 00:03:19.321 [710/743] Linking target app/dpdk-dumpcap 00:03:19.321 [711/743] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:03:19.579 [712/743] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:03:19.579 [713/743] Linking target app/dpdk-proc-info 00:03:19.579 [714/743] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:03:19.836 [715/743] Linking target app/dpdk-test-bbdev 00:03:19.836 [716/743] Linking target app/dpdk-test-acl 00:03:19.836 [717/743] Linking target app/dpdk-test-cmdline 00:03:20.093 [718/743] Linking target app/dpdk-test-compress-perf 00:03:20.093 [719/743] Linking target app/dpdk-test-crypto-perf 00:03:20.093 [720/743] Linking target app/dpdk-test-eventdev 00:03:20.093 [721/743] Linking target app/dpdk-test-fib 00:03:20.093 [722/743] Linking target app/dpdk-test-flow-perf 00:03:20.350 [723/743] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:03:20.350 [724/743] Linking target app/dpdk-test-gpudev 00:03:20.350 [725/743] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:03:20.608 [726/743] Linking target app/dpdk-test-pipeline 00:03:20.865 [727/743] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:03:20.865 [728/743] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:03:20.865 [729/743] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:03:21.123 [730/743] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:03:21.380 [731/743] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:03:21.380 [732/743] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:03:21.380 [733/743] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.380 [734/743] Linking target lib/librte_pipeline.so.23.0 00:03:21.638 [735/743] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:03:21.638 [736/743] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:03:21.638 [737/743] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:03:21.896 [738/743] Linking target app/dpdk-test-sad 00:03:22.155 [739/743] Linking target app/dpdk-test-regex 00:03:22.413 [740/743] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:03:22.413 [741/743] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:03:22.673 [742/743] Linking target 
app/dpdk-test-security-perf 00:03:22.673 [743/743] Linking target app/dpdk-testpmd 00:03:22.673 07:49:28 -- common/autobuild_common.sh@187 -- $ ninja -C /home/vagrant/spdk_repo/dpdk/build-tmp -j10 install 00:03:22.933 ninja: Entering directory `/home/vagrant/spdk_repo/dpdk/build-tmp' 00:03:22.933 [0/1] Installing files. 00:03:23.196 Installing subdir /home/vagrant/spdk_repo/dpdk/examples to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/bbdev_app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bbdev_app 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/bond/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bond 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/README to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/dummy.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t1.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t2.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/bpf/t3.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/bpf 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/cmdline/parse_obj_list.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/cmdline 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/common/pkt_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/common/altivec/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/altivec 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/common/neon/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/neon 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/common/sse/port_group.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/common/sse 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/distributor/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:23.196 Installing 
/home/vagrant/spdk_repo/dpdk/examples/distributor/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/distributor 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/dma/dmafwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/dma 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/ethapp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/ethtool-app/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/ethtool/lib/rte_ethtool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ethtool/lib 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.196 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_dev_self_test.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_aes.c to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ccm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_cmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_gcm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_hmac.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_rsa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_sha.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_tdes.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/fips_validation_xts.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/fips_validation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/fips_validation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/flow_classify.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_classify/ipv4_rules_file.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_classify 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/flow_blocks.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/flow_filtering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/flow_filtering 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/helloworld/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/helloworld 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_fragmentation/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_fragmentation 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 
00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/action.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/kni.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/link.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/mempool.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/pipeline.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/swq.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tap.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/tmgr.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/firewall.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/kni.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/rss.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_pipeline/examples/tap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ip_reassembly/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ip_reassembly 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep0.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ep1.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/esp.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/event_helper.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/flow.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipip.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_process.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/ipsec_worker.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/parser.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/rt.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sa.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.197 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sad.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp4.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/sp6.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 
00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/linux_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/load_env.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/pkttest.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/run_test.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing 
/home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/ipv4_multicast/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ipv4_multicast 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/cat.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-cat 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/l2fwd_poll.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.198 
Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-event/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-event 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-jobstats/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/shm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l2fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l2fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-graph/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-graph 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd-power/perf_core.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd-power 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/em_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_generic.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_fib.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.198 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_neon.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_route.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/l3fwd_sse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.199 Installing 
/home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v4.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_default_v6.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/lpm_route_parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/l3fwd/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/l3fwd 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/link_status_interrupt/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/link_status_interrupt 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/client_server_mp/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.199 Installing 
/home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/hotplug_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/simple_mp/mp_commands.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/multi_process/symmetric_mp/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/ntb/ntb_fwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ntb 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/packet_ordering/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/packet_ordering 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/conn.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/obj.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/thread.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/ethdev.io to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/fib_routing_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/hash_func.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/learner.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/meter.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/mirroring.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/packet.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing 
/home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/pcap.io to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/recirculation.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/registers.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/selector.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/varbit.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan.spec to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.py to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/pipeline/examples/vxlan_table.txt to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/pipeline/examples 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/ptpclient/ptpclient.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/ptpclient 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_meter/rte_policer.h to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_meter 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/app_thread.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.199 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cfg_file.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/cmdline.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_ov.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_pie.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/profile_red.cfg to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/qos_sched/stats.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/qos_sched 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/rxtx_callbacks/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/node/node.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/node 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/args.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:23.200 Installing 
/home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/init.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/server/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/server 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/server_node_efd/shared/common.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/service_cores/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/service_cores 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/skeleton/basicfwd.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/skeleton 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/timer/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/timer 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vdpa/vdpa_blk_compact.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vdpa 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/main.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost/virtio_net.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/blk_spec.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_blk/vhost_blk_compat.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_blk 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/Makefile to 
/home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vhost_crypto/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vhost_crypto 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/channel_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/power_manager.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/vm_power_cli.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/parse.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.200 Installing 
/home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/Makefile to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:23.200 Installing /home/vagrant/spdk_repo/dpdk/examples/vmdq_dcb/main.c to /home/vagrant/spdk_repo/dpdk/build/share/dpdk/examples/vmdq_dcb 00:03:23.200 Installing lib/librte_kvargs.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.200 Installing lib/librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.200 Installing lib/librte_telemetry.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.200 Installing lib/librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.200 Installing lib/librte_eal.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.200 Installing lib/librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.200 Installing lib/librte_ring.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.200 Installing lib/librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.201 Installing lib/librte_rcu.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.201 Installing lib/librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.201 Installing lib/librte_mempool.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.201 Installing lib/librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.201 Installing lib/librte_mbuf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.201 Installing lib/librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.201 Installing lib/librte_net.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_meter.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_ethdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_ethdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_cmdline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_metrics.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_hash.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_timer.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_timer.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_acl.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_bbdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_bitratestats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_bpf.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_cfgfile.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_compressdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_cryptodev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_distributor.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_efd.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_efd.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_eventdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_gpudev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_gro.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_gso.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_ip_frag.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_jobstats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_latencystats.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_lpm.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_member.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_pcapng.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_power.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing 
lib/librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_rawdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_regexdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_dmadev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_rib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_reorder.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_sched.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.463 Installing lib/librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_security.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_stack.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_vhost.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_ipsec.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_fib.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_port.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_pdump.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_table.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_pipeline.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_graph.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_node.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing lib/librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing drivers/librte_bus_pci.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing drivers/librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:23.464 Installing drivers/librte_bus_vdev.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing drivers/librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:23.464 Installing drivers/librte_mempool_ring.a to 
/home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing drivers/librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:23.464 Installing drivers/librte_net_i40e.a to /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.464 Installing drivers/librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0 00:03:23.464 Installing app/dpdk-dumpcap to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-pdump to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-proc-info to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-test-acl to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-test-bbdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-test-cmdline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-test-compress-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-test-crypto-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-test-eventdev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-test-fib to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-test-flow-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-test-gpudev to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-test-pipeline to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-testpmd to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-test-regex to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-test-sad to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing app/dpdk-test-security-perf to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/config/rte_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/kvargs/rte_kvargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/telemetry/rte_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.464 Installing 
/home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/generic/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include/generic 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cpuflags.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_cycles.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_io.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_memcpy.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_pause.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_prefetch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rtm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_rwlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_spinlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_vect.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_atomic_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_alarm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitmap.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.464 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bitops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_branch_prediction.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_bus.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_class.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_common.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_compat.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_debug.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_dev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_devargs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_memconfig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_eal_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_errno.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_epoll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_fbarray.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hexdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_hypervisor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_interrupts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_keepalive.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_launch.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_log.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_malloc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_mcslock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memory.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_memzone.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pci_dev_features.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_per_lcore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_pflock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_random.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_reciprocal.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqcount.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_seqlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_service_component.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_string_fns.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_tailq.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_thread.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_ticketlock.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_time.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_trace_point_register.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_uuid.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_version.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/include/rte_vfio.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/eal/linux/include/rte_os.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_c11_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_generic_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_peek_zc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts.h 
to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/rcu/rte_rcu_qsbr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/mempool/rte_mempool_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_ptype.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/mbuf/rte_mbuf_dyn.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ip.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_tcp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_udp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.465 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_esp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_sctp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_icmp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_arp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ether.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_macsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_vxlan.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gre.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_gtp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_net_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_mpls.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_higig.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ecpri.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_geneve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_l2tpv2.h to /home/vagrant/spdk_repo/dpdk/build/include 
00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/net/rte_ppp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/meter/rte_meter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_cman.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_dev_info.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_flow_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_mtr_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_tm_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_ethdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/ethdev/rte_eth_ctrl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/pci/rte_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_num.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_string.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_rdline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_vt100.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_socket.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_cirbuf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cmdline/cmdline_parse_portlist.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 
Installing /home/vagrant/spdk_repo/dpdk/lib/metrics/rte_metrics_telemetry.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_fbk_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash_crc.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_jhash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_sw.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_crc_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/hash/rte_thash_x86_gfni.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/timer/rte_timer.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/acl/rte_acl_osdep.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/bbdev/rte_bbdev_op.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/bitratestats/rte_bitrate.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/bpf_def.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/bpf/rte_bpf_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cfgfile/rte_cfgfile.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_compressdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/compressdev/rte_comp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing 
/home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.466 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_sym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_crypto_asym.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/cryptodev/rte_cryptodev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/distributor/rte_distributor.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/efd/rte_efd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_event_timer_adapter.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/eventdev/rte_eventdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/gpudev/rte_gpudev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/gro/rte_gro.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/gso/rte_gso.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/ip_frag/rte_ip_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/jobstats/rte_jobstats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/latencystats/rte_latencystats.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_altivec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_neon.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_scalar.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sse.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/lpm/rte_lpm_sve.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/member/rte_member.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 
Installing /home/vagrant/spdk_repo/dpdk/lib/pcapng/rte_pcapng.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_empty_poll.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_intel_uncore.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_pmd_mgmt.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/power/rte_power_guest_channel.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/rawdev/rte_rawdev_pmd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/regexdev/rte_regexdev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/dmadev/rte_dmadev_core.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/rib/rte_rib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/reorder/rte_reorder.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_approx.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_red.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_sched_common.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/sched/rte_pie.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/security/rte_security_driver.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_std.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_generic.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_c11.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing 
/home/vagrant/spdk_repo/dpdk/lib/stack/rte_stack_lf_stubs.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vdpa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_async.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/vhost/rte_vhost_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sa.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_sad.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/ipsec/rte_ipsec_group.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/fib/rte_fib6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_frag.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ras.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sched.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.467 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_sym_crypto.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_port_eventdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ethdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_fd.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_ring.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/port/rte_swx_port_source_sink.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/pdump/rte_pdump.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_hash_func.h to 
/home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_em.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_learner.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_selector.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_swx_table_wm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_acl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_array.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_cuckoo.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_lpm_ipv6.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_stub.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_lru_x86.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/table/rte_table_hash_func_arm64.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_port_in_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_table_action.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_pipeline.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_extern.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/pipeline/rte_swx_ctl.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/graph/rte_graph_worker.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_ip4_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/lib/node/rte_node_eth_api.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing 
/home/vagrant/spdk_repo/dpdk/drivers/bus/pci/rte_bus_pci.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-devbind.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-pmdinfo.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-telemetry.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/usertools/dpdk-hugepages.py to /home/vagrant/spdk_repo/dpdk/build/bin 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/rte_build_config.h to /home/vagrant/spdk_repo/dpdk/build/include 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:23.468 Installing /home/vagrant/spdk_repo/dpdk/build-tmp/meson-private/libdpdk.pc to /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig 00:03:23.468 Installing symlink pointing to librte_kvargs.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so.23 00:03:23.468 Installing symlink pointing to librte_kvargs.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_kvargs.so 00:03:23.468 Installing symlink pointing to librte_telemetry.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so.23 00:03:23.468 Installing symlink pointing to librte_telemetry.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_telemetry.so 00:03:23.468 Installing symlink pointing to librte_eal.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so.23 00:03:23.468 Installing symlink pointing to librte_eal.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eal.so 00:03:23.468 Installing symlink pointing to librte_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so.23 00:03:23.468 Installing symlink pointing to librte_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ring.so 00:03:23.468 Installing symlink pointing to librte_rcu.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so.23 00:03:23.468 Installing symlink pointing to librte_rcu.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rcu.so 00:03:23.468 Installing symlink pointing to librte_mempool.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so.23 00:03:23.468 Installing symlink pointing to librte_mempool.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mempool.so 00:03:23.468 Installing symlink pointing to librte_mbuf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so.23 00:03:23.468 Installing symlink pointing to librte_mbuf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_mbuf.so 00:03:23.468 Installing symlink pointing to librte_net.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so.23 00:03:23.468 Installing symlink pointing to librte_net.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_net.so 00:03:23.468 Installing symlink pointing to librte_meter.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so.23 00:03:23.468 Installing symlink pointing to librte_meter.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_meter.so 00:03:23.468 Installing symlink pointing to librte_ethdev.so.23.0 
to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so.23 00:03:23.468 Installing symlink pointing to librte_ethdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ethdev.so 00:03:23.468 Installing symlink pointing to librte_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so.23 00:03:23.468 Installing symlink pointing to librte_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pci.so 00:03:23.468 Installing symlink pointing to librte_cmdline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so.23 00:03:23.468 Installing symlink pointing to librte_cmdline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cmdline.so 00:03:23.468 Installing symlink pointing to librte_metrics.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so.23 00:03:23.468 Installing symlink pointing to librte_metrics.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_metrics.so 00:03:23.468 Installing symlink pointing to librte_hash.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so.23 00:03:23.468 Installing symlink pointing to librte_hash.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_hash.so 00:03:23.468 Installing symlink pointing to librte_timer.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so.23 00:03:23.468 Installing symlink pointing to librte_timer.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_timer.so 00:03:23.469 Installing symlink pointing to librte_acl.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so.23 00:03:23.469 Installing symlink pointing to librte_acl.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_acl.so 00:03:23.469 Installing symlink pointing to librte_bbdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so.23 00:03:23.469 Installing symlink pointing to librte_bbdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bbdev.so 00:03:23.469 Installing symlink pointing to librte_bitratestats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so.23 00:03:23.469 Installing symlink pointing to librte_bitratestats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bitratestats.so 00:03:23.469 Installing symlink pointing to librte_bpf.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so.23 00:03:23.469 Installing symlink pointing to librte_bpf.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_bpf.so 00:03:23.469 Installing symlink pointing to librte_cfgfile.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so.23 00:03:23.469 Installing symlink pointing to librte_cfgfile.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cfgfile.so 00:03:23.469 Installing symlink pointing to librte_compressdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so.23 00:03:23.469 Installing symlink pointing to librte_compressdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_compressdev.so 00:03:23.469 Installing symlink pointing to librte_cryptodev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so.23 00:03:23.469 Installing symlink pointing to librte_cryptodev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_cryptodev.so 00:03:23.469 Installing symlink pointing to librte_distributor.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so.23 00:03:23.469 Installing symlink pointing to librte_distributor.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_distributor.so 00:03:23.469 Installing symlink pointing to librte_efd.so.23.0 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so.23 00:03:23.469 Installing symlink pointing to librte_efd.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_efd.so 00:03:23.469 Installing symlink pointing to librte_eventdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so.23 00:03:23.469 Installing symlink pointing to librte_eventdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_eventdev.so 00:03:23.469 Installing symlink pointing to librte_gpudev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so.23 00:03:23.469 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:03:23.469 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:03:23.469 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:03:23.469 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:03:23.469 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:03:23.469 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:03:23.469 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:03:23.469 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:03:23.469 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:03:23.469 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:03:23.469 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:03:23.469 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:03:23.469 Installing symlink pointing to librte_gpudev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gpudev.so 00:03:23.469 Installing symlink pointing to librte_gro.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so.23 00:03:23.469 Installing symlink pointing to librte_gro.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gro.so 00:03:23.469 Installing symlink pointing to librte_gso.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so.23 00:03:23.469 Installing symlink pointing to librte_gso.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_gso.so 00:03:23.469 Installing symlink pointing to librte_ip_frag.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so.23 00:03:23.469 Installing symlink pointing to librte_ip_frag.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ip_frag.so 00:03:23.469 Installing symlink pointing to librte_jobstats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so.23 00:03:23.469 Installing symlink pointing to librte_jobstats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_jobstats.so 00:03:23.469 Installing symlink pointing to librte_latencystats.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so.23 00:03:23.469 Installing symlink pointing to librte_latencystats.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_latencystats.so 00:03:23.469 Installing symlink pointing to librte_lpm.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so.23 00:03:23.469 Installing symlink pointing to librte_lpm.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_lpm.so 00:03:23.469 Installing symlink pointing to librte_member.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so.23 00:03:23.469 Installing symlink pointing to librte_member.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_member.so 00:03:23.469 Installing symlink pointing to librte_pcapng.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so.23 
00:03:23.469 Installing symlink pointing to librte_pcapng.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pcapng.so 00:03:23.469 Installing symlink pointing to librte_power.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so.23 00:03:23.469 Installing symlink pointing to librte_power.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_power.so 00:03:23.469 Installing symlink pointing to librte_rawdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so.23 00:03:23.469 Installing symlink pointing to librte_rawdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rawdev.so 00:03:23.469 Installing symlink pointing to librte_regexdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so.23 00:03:23.469 Installing symlink pointing to librte_regexdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_regexdev.so 00:03:23.469 Installing symlink pointing to librte_dmadev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so.23 00:03:23.469 Installing symlink pointing to librte_dmadev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_dmadev.so 00:03:23.469 Installing symlink pointing to librte_rib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so.23 00:03:23.469 Installing symlink pointing to librte_rib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_rib.so 00:03:23.469 Installing symlink pointing to librte_reorder.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so.23 00:03:23.470 Installing symlink pointing to librte_reorder.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_reorder.so 00:03:23.470 Installing symlink pointing to librte_sched.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so.23 00:03:23.470 Installing symlink pointing to librte_sched.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_sched.so 00:03:23.470 Installing symlink pointing to librte_security.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so.23 00:03:23.470 Installing symlink pointing to librte_security.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_security.so 00:03:23.470 Installing symlink pointing to librte_stack.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so.23 00:03:23.470 Installing symlink pointing to librte_stack.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_stack.so 00:03:23.470 Installing symlink pointing to librte_vhost.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so.23 00:03:23.470 Installing symlink pointing to librte_vhost.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_vhost.so 00:03:23.470 Installing symlink pointing to librte_ipsec.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so.23 00:03:23.470 Installing symlink pointing to librte_ipsec.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_ipsec.so 00:03:23.470 Installing symlink pointing to librte_fib.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so.23 00:03:23.470 Installing symlink pointing to librte_fib.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_fib.so 00:03:23.470 Installing symlink pointing to librte_port.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so.23 00:03:23.470 Installing symlink pointing to librte_port.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_port.so 00:03:23.470 Installing symlink pointing to librte_pdump.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so.23 00:03:23.470 Installing symlink pointing to librte_pdump.so.23 to 
/home/vagrant/spdk_repo/dpdk/build/lib/librte_pdump.so 00:03:23.470 Installing symlink pointing to librte_table.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so.23 00:03:23.470 Installing symlink pointing to librte_table.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_table.so 00:03:23.470 Installing symlink pointing to librte_pipeline.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so.23 00:03:23.470 Installing symlink pointing to librte_pipeline.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_pipeline.so 00:03:23.470 Installing symlink pointing to librte_graph.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so.23 00:03:23.470 Installing symlink pointing to librte_graph.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_graph.so 00:03:23.470 Installing symlink pointing to librte_node.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so.23 00:03:23.470 Installing symlink pointing to librte_node.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/librte_node.so 00:03:23.470 Installing symlink pointing to librte_bus_pci.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:03:23.470 Installing symlink pointing to librte_bus_pci.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:03:23.470 Installing symlink pointing to librte_bus_vdev.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:03:23.470 Installing symlink pointing to librte_bus_vdev.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:03:23.470 Installing symlink pointing to librte_mempool_ring.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:03:23.470 Installing symlink pointing to librte_mempool_ring.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:03:23.470 Installing symlink pointing to librte_net_i40e.so.23.0 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:03:23.470 Installing symlink pointing to librte_net_i40e.so.23 to /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:03:23.470 Running custom install script '/bin/sh /home/vagrant/spdk_repo/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:03:23.728 07:49:29 -- common/autobuild_common.sh@189 -- $ uname -s 00:03:23.728 07:49:29 -- common/autobuild_common.sh@189 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:03:23.728 07:49:29 -- common/autobuild_common.sh@200 -- $ cat 00:03:23.728 07:49:29 -- common/autobuild_common.sh@205 -- $ cd /home/vagrant/spdk_repo/spdk 00:03:23.728 00:03:23.728 real 0m51.327s 00:03:23.728 user 6m10.558s 00:03:23.728 sys 0m54.775s 00:03:23.728 07:49:29 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:23.728 07:49:29 -- common/autotest_common.sh@10 -- $ set +x 00:03:23.728 ************************************ 00:03:23.728 END TEST build_native_dpdk 00:03:23.728 ************************************ 00:03:23.728 07:49:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:23.728 07:49:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:23.728 07:49:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:23.728 07:49:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:23.728 07:49:29 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:23.728 07:49:29 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:23.728 07:49:29 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:23.728 
07:49:29 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build --with-shared 00:03:23.728 Using /home/vagrant/spdk_repo/dpdk/build/lib/pkgconfig for additional libs... 00:03:23.986 DPDK libraries: /home/vagrant/spdk_repo/dpdk/build/lib 00:03:23.986 DPDK includes: //home/vagrant/spdk_repo/dpdk/build/include 00:03:23.986 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:24.243 Using 'verbs' RDMA provider 00:03:40.054 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/isa-l/spdk-isal.log)...done. 00:03:52.258 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:03:52.258 Creating mk/config.mk...done. 00:03:52.259 Creating mk/cc.flags.mk...done. 00:03:52.259 Type 'make' to build. 00:03:52.259 07:49:56 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:03:52.259 07:49:56 -- common/autotest_common.sh@1077 -- $ '[' 3 -le 1 ']' 00:03:52.259 07:49:56 -- common/autotest_common.sh@1083 -- $ xtrace_disable 00:03:52.259 07:49:56 -- common/autotest_common.sh@10 -- $ set +x 00:03:52.259 ************************************ 00:03:52.259 START TEST make 00:03:52.259 ************************************ 00:03:52.259 07:49:57 -- common/autotest_common.sh@1104 -- $ make -j10 00:03:52.259 make[1]: Nothing to be done for 'all'. 00:04:14.217 CC lib/ut/ut.o 00:04:14.217 CC lib/ut_mock/mock.o 00:04:14.217 CC lib/log/log_flags.o 00:04:14.217 CC lib/log/log.o 00:04:14.217 CC lib/log/log_deprecated.o 00:04:14.476 LIB libspdk_ut_mock.a 00:04:14.476 LIB libspdk_ut.a 00:04:14.476 SO libspdk_ut_mock.so.5.0 00:04:14.476 LIB libspdk_log.a 00:04:14.476 SO libspdk_ut.so.1.0 00:04:14.476 SO libspdk_log.so.6.1 00:04:14.476 SYMLINK libspdk_ut_mock.so 00:04:14.735 SYMLINK libspdk_ut.so 00:04:14.735 SYMLINK libspdk_log.so 00:04:14.735 CC lib/ioat/ioat.o 00:04:14.735 CC lib/dma/dma.o 00:04:14.735 CC lib/util/base64.o 00:04:14.735 CXX lib/trace_parser/trace.o 00:04:14.735 CC lib/util/bit_array.o 00:04:14.735 CC lib/util/cpuset.o 00:04:14.735 CC lib/util/crc16.o 00:04:14.735 CC lib/util/crc32.o 00:04:14.735 CC lib/util/crc32c.o 00:04:14.993 CC lib/vfio_user/host/vfio_user_pci.o 00:04:14.993 CC lib/util/crc32_ieee.o 00:04:14.993 CC lib/util/crc64.o 00:04:14.993 CC lib/vfio_user/host/vfio_user.o 00:04:14.993 CC lib/util/dif.o 00:04:14.993 LIB libspdk_dma.a 00:04:14.993 CC lib/util/fd.o 00:04:14.993 CC lib/util/file.o 00:04:14.993 SO libspdk_dma.so.3.0 00:04:14.993 CC lib/util/hexlify.o 00:04:14.993 SYMLINK libspdk_dma.so 00:04:14.993 CC lib/util/iov.o 00:04:14.993 LIB libspdk_ioat.a 00:04:15.252 CC lib/util/math.o 00:04:15.252 CC lib/util/pipe.o 00:04:15.252 SO libspdk_ioat.so.6.0 00:04:15.252 LIB libspdk_vfio_user.a 00:04:15.252 CC lib/util/strerror_tls.o 00:04:15.252 CC lib/util/string.o 00:04:15.252 SO libspdk_vfio_user.so.4.0 00:04:15.252 SYMLINK libspdk_ioat.so 00:04:15.252 CC lib/util/uuid.o 00:04:15.252 CC lib/util/fd_group.o 00:04:15.252 CC lib/util/xor.o 00:04:15.252 SYMLINK libspdk_vfio_user.so 00:04:15.252 CC lib/util/zipf.o 00:04:15.510 LIB libspdk_util.a 00:04:15.768 SO libspdk_util.so.8.0 00:04:15.768 SYMLINK libspdk_util.so 00:04:15.768 LIB libspdk_trace_parser.a 00:04:16.026 SO libspdk_trace_parser.so.4.0 00:04:16.026 CC lib/conf/conf.o 00:04:16.026 CC lib/env_dpdk/env.o 
00:04:16.026 CC lib/env_dpdk/memory.o 00:04:16.026 CC lib/rdma/rdma_verbs.o 00:04:16.026 CC lib/json/json_util.o 00:04:16.026 CC lib/vmd/vmd.o 00:04:16.026 CC lib/json/json_parse.o 00:04:16.026 CC lib/rdma/common.o 00:04:16.026 CC lib/idxd/idxd.o 00:04:16.026 SYMLINK libspdk_trace_parser.so 00:04:16.026 CC lib/idxd/idxd_user.o 00:04:16.284 CC lib/vmd/led.o 00:04:16.284 LIB libspdk_conf.a 00:04:16.284 CC lib/json/json_write.o 00:04:16.284 CC lib/idxd/idxd_kernel.o 00:04:16.284 SO libspdk_conf.so.5.0 00:04:16.284 LIB libspdk_rdma.a 00:04:16.284 CC lib/env_dpdk/pci.o 00:04:16.284 SYMLINK libspdk_conf.so 00:04:16.284 CC lib/env_dpdk/init.o 00:04:16.284 SO libspdk_rdma.so.5.0 00:04:16.284 CC lib/env_dpdk/threads.o 00:04:16.284 CC lib/env_dpdk/pci_ioat.o 00:04:16.284 SYMLINK libspdk_rdma.so 00:04:16.284 CC lib/env_dpdk/pci_virtio.o 00:04:16.284 CC lib/env_dpdk/pci_vmd.o 00:04:16.542 CC lib/env_dpdk/pci_idxd.o 00:04:16.542 CC lib/env_dpdk/pci_event.o 00:04:16.542 LIB libspdk_json.a 00:04:16.542 LIB libspdk_idxd.a 00:04:16.542 CC lib/env_dpdk/sigbus_handler.o 00:04:16.542 CC lib/env_dpdk/pci_dpdk.o 00:04:16.542 SO libspdk_json.so.5.1 00:04:16.542 SO libspdk_idxd.so.11.0 00:04:16.542 LIB libspdk_vmd.a 00:04:16.542 SYMLINK libspdk_json.so 00:04:16.542 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:16.542 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:16.542 SYMLINK libspdk_idxd.so 00:04:16.542 SO libspdk_vmd.so.5.0 00:04:16.801 SYMLINK libspdk_vmd.so 00:04:16.801 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:16.801 CC lib/jsonrpc/jsonrpc_server.o 00:04:16.801 CC lib/jsonrpc/jsonrpc_client.o 00:04:16.801 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:17.059 LIB libspdk_jsonrpc.a 00:04:17.059 SO libspdk_jsonrpc.so.5.1 00:04:17.059 SYMLINK libspdk_jsonrpc.so 00:04:17.316 CC lib/rpc/rpc.o 00:04:17.316 LIB libspdk_env_dpdk.a 00:04:17.316 SO libspdk_env_dpdk.so.13.0 00:04:17.572 LIB libspdk_rpc.a 00:04:17.572 SO libspdk_rpc.so.5.0 00:04:17.572 SYMLINK libspdk_env_dpdk.so 00:04:17.572 SYMLINK libspdk_rpc.so 00:04:17.830 CC lib/sock/sock.o 00:04:17.830 CC lib/sock/sock_rpc.o 00:04:17.830 CC lib/trace/trace.o 00:04:17.830 CC lib/trace/trace_flags.o 00:04:17.830 CC lib/trace/trace_rpc.o 00:04:17.830 CC lib/notify/notify.o 00:04:17.830 CC lib/notify/notify_rpc.o 00:04:18.088 LIB libspdk_notify.a 00:04:18.088 SO libspdk_notify.so.5.0 00:04:18.088 LIB libspdk_trace.a 00:04:18.088 SO libspdk_trace.so.9.0 00:04:18.088 SYMLINK libspdk_notify.so 00:04:18.088 SYMLINK libspdk_trace.so 00:04:18.088 LIB libspdk_sock.a 00:04:18.346 SO libspdk_sock.so.8.0 00:04:18.346 CC lib/thread/thread.o 00:04:18.346 CC lib/thread/iobuf.o 00:04:18.346 SYMLINK libspdk_sock.so 00:04:18.604 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:18.604 CC lib/nvme/nvme_ctrlr.o 00:04:18.604 CC lib/nvme/nvme_fabric.o 00:04:18.604 CC lib/nvme/nvme_ns_cmd.o 00:04:18.604 CC lib/nvme/nvme_ns.o 00:04:18.604 CC lib/nvme/nvme_pcie.o 00:04:18.604 CC lib/nvme/nvme_pcie_common.o 00:04:18.604 CC lib/nvme/nvme_qpair.o 00:04:18.604 CC lib/nvme/nvme.o 00:04:19.200 CC lib/nvme/nvme_quirks.o 00:04:19.200 CC lib/nvme/nvme_transport.o 00:04:19.458 CC lib/nvme/nvme_discovery.o 00:04:19.458 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:19.458 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:19.458 CC lib/nvme/nvme_tcp.o 00:04:19.458 CC lib/nvme/nvme_opal.o 00:04:19.716 CC lib/nvme/nvme_io_msg.o 00:04:19.716 LIB libspdk_thread.a 00:04:19.975 SO libspdk_thread.so.9.0 00:04:19.975 CC lib/nvme/nvme_poll_group.o 00:04:19.975 SYMLINK libspdk_thread.so 00:04:19.975 CC lib/nvme/nvme_zns.o 00:04:19.975 CC lib/nvme/nvme_cuse.o 
00:04:19.975 CC lib/accel/accel.o 00:04:19.975 CC lib/nvme/nvme_vfio_user.o 00:04:20.233 CC lib/accel/accel_rpc.o 00:04:20.233 CC lib/blob/blobstore.o 00:04:20.233 CC lib/nvme/nvme_rdma.o 00:04:20.491 CC lib/init/json_config.o 00:04:20.491 CC lib/init/subsystem.o 00:04:20.749 CC lib/init/subsystem_rpc.o 00:04:20.749 CC lib/blob/request.o 00:04:20.749 CC lib/init/rpc.o 00:04:20.749 CC lib/blob/zeroes.o 00:04:20.749 CC lib/accel/accel_sw.o 00:04:20.749 CC lib/blob/blob_bs_dev.o 00:04:20.749 LIB libspdk_init.a 00:04:21.007 SO libspdk_init.so.4.0 00:04:21.007 SYMLINK libspdk_init.so 00:04:21.007 CC lib/virtio/virtio.o 00:04:21.007 CC lib/virtio/virtio_vhost_user.o 00:04:21.007 CC lib/virtio/virtio_vfio_user.o 00:04:21.007 CC lib/virtio/virtio_pci.o 00:04:21.007 LIB libspdk_accel.a 00:04:21.007 CC lib/event/reactor.o 00:04:21.007 CC lib/event/app.o 00:04:21.007 CC lib/event/log_rpc.o 00:04:21.007 SO libspdk_accel.so.14.0 00:04:21.265 SYMLINK libspdk_accel.so 00:04:21.265 CC lib/event/app_rpc.o 00:04:21.265 CC lib/event/scheduler_static.o 00:04:21.265 LIB libspdk_virtio.a 00:04:21.265 CC lib/bdev/bdev.o 00:04:21.265 SO libspdk_virtio.so.6.0 00:04:21.265 CC lib/bdev/bdev_rpc.o 00:04:21.265 CC lib/bdev/bdev_zone.o 00:04:21.522 CC lib/bdev/part.o 00:04:21.522 CC lib/bdev/scsi_nvme.o 00:04:21.522 SYMLINK libspdk_virtio.so 00:04:21.522 LIB libspdk_event.a 00:04:21.522 SO libspdk_event.so.12.0 00:04:21.522 LIB libspdk_nvme.a 00:04:21.522 SYMLINK libspdk_event.so 00:04:21.780 SO libspdk_nvme.so.12.0 00:04:22.039 SYMLINK libspdk_nvme.so 00:04:22.974 LIB libspdk_blob.a 00:04:23.232 SO libspdk_blob.so.10.1 00:04:23.232 SYMLINK libspdk_blob.so 00:04:23.490 CC lib/blobfs/blobfs.o 00:04:23.490 CC lib/blobfs/tree.o 00:04:23.490 CC lib/lvol/lvol.o 00:04:24.069 LIB libspdk_bdev.a 00:04:24.069 SO libspdk_bdev.so.14.0 00:04:24.069 SYMLINK libspdk_bdev.so 00:04:24.353 LIB libspdk_blobfs.a 00:04:24.353 LIB libspdk_lvol.a 00:04:24.353 CC lib/scsi/lun.o 00:04:24.353 CC lib/nvmf/ctrlr.o 00:04:24.353 CC lib/scsi/dev.o 00:04:24.353 SO libspdk_blobfs.so.9.0 00:04:24.353 CC lib/ftl/ftl_core.o 00:04:24.353 CC lib/scsi/port.o 00:04:24.353 CC lib/nvmf/ctrlr_discovery.o 00:04:24.353 CC lib/ublk/ublk.o 00:04:24.353 CC lib/nbd/nbd.o 00:04:24.353 SO libspdk_lvol.so.9.1 00:04:24.353 SYMLINK libspdk_blobfs.so 00:04:24.353 CC lib/ftl/ftl_init.o 00:04:24.353 SYMLINK libspdk_lvol.so 00:04:24.353 CC lib/ftl/ftl_layout.o 00:04:24.611 CC lib/ftl/ftl_debug.o 00:04:24.611 CC lib/scsi/scsi.o 00:04:24.611 CC lib/scsi/scsi_bdev.o 00:04:24.611 CC lib/ublk/ublk_rpc.o 00:04:24.611 CC lib/scsi/scsi_pr.o 00:04:24.611 CC lib/nbd/nbd_rpc.o 00:04:24.611 CC lib/nvmf/ctrlr_bdev.o 00:04:24.611 CC lib/nvmf/subsystem.o 00:04:24.611 CC lib/ftl/ftl_io.o 00:04:24.869 CC lib/ftl/ftl_sb.o 00:04:24.869 CC lib/nvmf/nvmf.o 00:04:24.869 LIB libspdk_nbd.a 00:04:24.869 LIB libspdk_ublk.a 00:04:24.869 SO libspdk_nbd.so.6.0 00:04:24.869 SO libspdk_ublk.so.2.0 00:04:24.870 SYMLINK libspdk_nbd.so 00:04:24.870 CC lib/ftl/ftl_l2p.o 00:04:24.870 CC lib/ftl/ftl_l2p_flat.o 00:04:24.870 CC lib/ftl/ftl_nv_cache.o 00:04:25.128 CC lib/ftl/ftl_band.o 00:04:25.128 SYMLINK libspdk_ublk.so 00:04:25.128 CC lib/ftl/ftl_band_ops.o 00:04:25.128 CC lib/scsi/scsi_rpc.o 00:04:25.128 CC lib/scsi/task.o 00:04:25.128 CC lib/ftl/ftl_writer.o 00:04:25.128 CC lib/ftl/ftl_rq.o 00:04:25.387 CC lib/nvmf/nvmf_rpc.o 00:04:25.387 CC lib/ftl/ftl_reloc.o 00:04:25.387 CC lib/ftl/ftl_l2p_cache.o 00:04:25.387 CC lib/ftl/ftl_p2l.o 00:04:25.387 LIB libspdk_scsi.a 00:04:25.645 SO libspdk_scsi.so.8.0 
00:04:25.645 CC lib/ftl/mngt/ftl_mngt.o 00:04:25.645 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:25.645 SYMLINK libspdk_scsi.so 00:04:25.645 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:25.903 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:25.903 CC lib/iscsi/conn.o 00:04:25.903 CC lib/nvmf/transport.o 00:04:25.903 CC lib/iscsi/init_grp.o 00:04:25.903 CC lib/nvmf/tcp.o 00:04:25.903 CC lib/vhost/vhost.o 00:04:25.903 CC lib/vhost/vhost_rpc.o 00:04:25.903 CC lib/vhost/vhost_scsi.o 00:04:25.903 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:26.160 CC lib/vhost/vhost_blk.o 00:04:26.160 CC lib/vhost/rte_vhost_user.o 00:04:26.160 CC lib/nvmf/rdma.o 00:04:26.417 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:26.417 CC lib/iscsi/iscsi.o 00:04:26.417 CC lib/iscsi/md5.o 00:04:26.674 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:26.674 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:26.674 CC lib/iscsi/param.o 00:04:26.674 CC lib/iscsi/portal_grp.o 00:04:26.932 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:26.932 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:26.932 CC lib/iscsi/tgt_node.o 00:04:26.932 CC lib/iscsi/iscsi_subsystem.o 00:04:26.932 CC lib/iscsi/iscsi_rpc.o 00:04:26.932 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:26.932 CC lib/iscsi/task.o 00:04:27.189 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:27.189 LIB libspdk_vhost.a 00:04:27.189 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:27.189 CC lib/ftl/utils/ftl_conf.o 00:04:27.189 SO libspdk_vhost.so.7.1 00:04:27.189 CC lib/ftl/utils/ftl_md.o 00:04:27.445 CC lib/ftl/utils/ftl_mempool.o 00:04:27.445 SYMLINK libspdk_vhost.so 00:04:27.445 CC lib/ftl/utils/ftl_bitmap.o 00:04:27.445 CC lib/ftl/utils/ftl_property.o 00:04:27.445 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:27.446 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:27.446 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:27.446 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:27.446 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:27.446 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:27.704 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:27.704 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:27.704 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:27.704 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:27.704 CC lib/ftl/base/ftl_base_dev.o 00:04:27.704 CC lib/ftl/base/ftl_base_bdev.o 00:04:27.704 CC lib/ftl/ftl_trace.o 00:04:27.704 LIB libspdk_iscsi.a 00:04:27.963 SO libspdk_iscsi.so.7.0 00:04:27.963 LIB libspdk_ftl.a 00:04:27.963 SYMLINK libspdk_iscsi.so 00:04:28.220 LIB libspdk_nvmf.a 00:04:28.220 SO libspdk_ftl.so.8.0 00:04:28.220 SO libspdk_nvmf.so.17.0 00:04:28.478 SYMLINK libspdk_nvmf.so 00:04:28.478 SYMLINK libspdk_ftl.so 00:04:28.735 CC module/env_dpdk/env_dpdk_rpc.o 00:04:28.992 CC module/accel/dsa/accel_dsa.o 00:04:28.992 CC module/sock/posix/posix.o 00:04:28.992 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:28.992 CC module/accel/iaa/accel_iaa.o 00:04:28.992 CC module/sock/uring/uring.o 00:04:28.992 CC module/accel/error/accel_error.o 00:04:28.992 CC module/accel/ioat/accel_ioat.o 00:04:28.992 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:28.992 CC module/blob/bdev/blob_bdev.o 00:04:28.992 LIB libspdk_env_dpdk_rpc.a 00:04:28.992 SO libspdk_env_dpdk_rpc.so.5.0 00:04:28.992 LIB libspdk_scheduler_dpdk_governor.a 00:04:28.992 SYMLINK libspdk_env_dpdk_rpc.so 00:04:28.992 CC module/accel/error/accel_error_rpc.o 00:04:28.992 SO libspdk_scheduler_dpdk_governor.so.3.0 00:04:28.992 CC module/accel/dsa/accel_dsa_rpc.o 00:04:28.992 CC module/accel/ioat/accel_ioat_rpc.o 00:04:28.992 LIB libspdk_scheduler_dynamic.a 00:04:28.992 CC module/accel/iaa/accel_iaa_rpc.o 00:04:28.992 SO libspdk_scheduler_dynamic.so.3.0 
00:04:29.250 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:29.250 LIB libspdk_blob_bdev.a 00:04:29.250 SYMLINK libspdk_scheduler_dynamic.so 00:04:29.250 SO libspdk_blob_bdev.so.10.1 00:04:29.250 LIB libspdk_accel_dsa.a 00:04:29.250 LIB libspdk_accel_ioat.a 00:04:29.250 CC module/scheduler/gscheduler/gscheduler.o 00:04:29.250 LIB libspdk_accel_iaa.a 00:04:29.250 SYMLINK libspdk_blob_bdev.so 00:04:29.250 SO libspdk_accel_ioat.so.5.0 00:04:29.250 SO libspdk_accel_dsa.so.4.0 00:04:29.250 SO libspdk_accel_iaa.so.2.0 00:04:29.250 LIB libspdk_accel_error.a 00:04:29.250 SYMLINK libspdk_accel_ioat.so 00:04:29.250 SYMLINK libspdk_accel_dsa.so 00:04:29.250 SYMLINK libspdk_accel_iaa.so 00:04:29.250 SO libspdk_accel_error.so.1.0 00:04:29.507 LIB libspdk_scheduler_gscheduler.a 00:04:29.507 SO libspdk_scheduler_gscheduler.so.3.0 00:04:29.507 CC module/bdev/delay/vbdev_delay.o 00:04:29.507 SYMLINK libspdk_accel_error.so 00:04:29.507 CC module/bdev/error/vbdev_error.o 00:04:29.507 CC module/blobfs/bdev/blobfs_bdev.o 00:04:29.507 CC module/bdev/gpt/gpt.o 00:04:29.507 CC module/bdev/lvol/vbdev_lvol.o 00:04:29.507 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:29.507 CC module/bdev/malloc/bdev_malloc.o 00:04:29.507 SYMLINK libspdk_scheduler_gscheduler.so 00:04:29.507 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:29.507 LIB libspdk_sock_uring.a 00:04:29.507 SO libspdk_sock_uring.so.4.0 00:04:29.507 LIB libspdk_sock_posix.a 00:04:29.507 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:29.765 SO libspdk_sock_posix.so.5.0 00:04:29.765 CC module/bdev/gpt/vbdev_gpt.o 00:04:29.765 SYMLINK libspdk_sock_uring.so 00:04:29.765 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:29.765 CC module/bdev/error/vbdev_error_rpc.o 00:04:29.765 SYMLINK libspdk_sock_posix.so 00:04:29.765 CC module/bdev/null/bdev_null.o 00:04:29.765 LIB libspdk_bdev_delay.a 00:04:29.765 LIB libspdk_blobfs_bdev.a 00:04:29.765 CC module/bdev/nvme/bdev_nvme.o 00:04:29.765 CC module/bdev/passthru/vbdev_passthru.o 00:04:29.765 LIB libspdk_bdev_malloc.a 00:04:29.765 SO libspdk_bdev_delay.so.5.0 00:04:29.765 SO libspdk_blobfs_bdev.so.5.0 00:04:29.765 SO libspdk_bdev_malloc.so.5.0 00:04:29.765 LIB libspdk_bdev_error.a 00:04:30.022 SYMLINK libspdk_blobfs_bdev.so 00:04:30.022 SYMLINK libspdk_bdev_delay.so 00:04:30.022 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:30.022 SO libspdk_bdev_error.so.5.0 00:04:30.022 CC module/bdev/null/bdev_null_rpc.o 00:04:30.022 SYMLINK libspdk_bdev_malloc.so 00:04:30.022 LIB libspdk_bdev_gpt.a 00:04:30.022 LIB libspdk_bdev_lvol.a 00:04:30.022 SO libspdk_bdev_gpt.so.5.0 00:04:30.022 SYMLINK libspdk_bdev_error.so 00:04:30.022 SO libspdk_bdev_lvol.so.5.0 00:04:30.022 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:30.022 SYMLINK libspdk_bdev_gpt.so 00:04:30.022 CC module/bdev/split/vbdev_split.o 00:04:30.022 CC module/bdev/raid/bdev_raid.o 00:04:30.022 SYMLINK libspdk_bdev_lvol.so 00:04:30.022 CC module/bdev/raid/bdev_raid_rpc.o 00:04:30.022 LIB libspdk_bdev_passthru.a 00:04:30.279 LIB libspdk_bdev_null.a 00:04:30.279 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:30.279 SO libspdk_bdev_passthru.so.5.0 00:04:30.279 SO libspdk_bdev_null.so.5.0 00:04:30.279 CC module/bdev/uring/bdev_uring.o 00:04:30.279 SYMLINK libspdk_bdev_passthru.so 00:04:30.279 CC module/bdev/uring/bdev_uring_rpc.o 00:04:30.279 SYMLINK libspdk_bdev_null.so 00:04:30.279 CC module/bdev/aio/bdev_aio.o 00:04:30.280 CC module/bdev/split/vbdev_split_rpc.o 00:04:30.280 CC module/bdev/aio/bdev_aio_rpc.o 00:04:30.537 CC module/bdev/nvme/nvme_rpc.o 00:04:30.537 LIB 
libspdk_bdev_split.a 00:04:30.537 CC module/bdev/ftl/bdev_ftl.o 00:04:30.537 SO libspdk_bdev_split.so.5.0 00:04:30.537 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:30.537 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:30.537 SYMLINK libspdk_bdev_split.so 00:04:30.537 LIB libspdk_bdev_uring.a 00:04:30.537 LIB libspdk_bdev_aio.a 00:04:30.537 SO libspdk_bdev_uring.so.5.0 00:04:30.537 SO libspdk_bdev_aio.so.5.0 00:04:30.795 CC module/bdev/iscsi/bdev_iscsi.o 00:04:30.795 LIB libspdk_bdev_zone_block.a 00:04:30.795 SYMLINK libspdk_bdev_aio.so 00:04:30.795 CC module/bdev/nvme/bdev_mdns_client.o 00:04:30.795 SYMLINK libspdk_bdev_uring.so 00:04:30.795 CC module/bdev/nvme/vbdev_opal.o 00:04:30.795 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:30.795 SO libspdk_bdev_zone_block.so.5.0 00:04:30.795 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:30.795 LIB libspdk_bdev_ftl.a 00:04:30.795 SYMLINK libspdk_bdev_zone_block.so 00:04:30.795 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:30.795 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:30.795 SO libspdk_bdev_ftl.so.5.0 00:04:30.795 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:30.795 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:30.795 SYMLINK libspdk_bdev_ftl.so 00:04:30.795 CC module/bdev/raid/bdev_raid_sb.o 00:04:31.053 CC module/bdev/raid/raid0.o 00:04:31.053 CC module/bdev/raid/raid1.o 00:04:31.053 CC module/bdev/raid/concat.o 00:04:31.053 LIB libspdk_bdev_iscsi.a 00:04:31.053 SO libspdk_bdev_iscsi.so.5.0 00:04:31.053 SYMLINK libspdk_bdev_iscsi.so 00:04:31.310 LIB libspdk_bdev_raid.a 00:04:31.310 SO libspdk_bdev_raid.so.5.0 00:04:31.310 LIB libspdk_bdev_virtio.a 00:04:31.310 SYMLINK libspdk_bdev_raid.so 00:04:31.310 SO libspdk_bdev_virtio.so.5.0 00:04:31.568 SYMLINK libspdk_bdev_virtio.so 00:04:32.133 LIB libspdk_bdev_nvme.a 00:04:32.134 SO libspdk_bdev_nvme.so.6.0 00:04:32.134 SYMLINK libspdk_bdev_nvme.so 00:04:32.699 CC module/event/subsystems/sock/sock.o 00:04:32.699 CC module/event/subsystems/vmd/vmd.o 00:04:32.699 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:32.699 CC module/event/subsystems/iobuf/iobuf.o 00:04:32.699 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:32.699 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:32.699 CC module/event/subsystems/scheduler/scheduler.o 00:04:32.699 LIB libspdk_event_sock.a 00:04:32.699 LIB libspdk_event_vhost_blk.a 00:04:32.699 LIB libspdk_event_iobuf.a 00:04:32.699 SO libspdk_event_sock.so.4.0 00:04:32.699 LIB libspdk_event_vmd.a 00:04:32.699 SO libspdk_event_vhost_blk.so.2.0 00:04:32.699 LIB libspdk_event_scheduler.a 00:04:32.699 SO libspdk_event_iobuf.so.2.0 00:04:32.699 SO libspdk_event_scheduler.so.3.0 00:04:32.699 SO libspdk_event_vmd.so.5.0 00:04:32.699 SYMLINK libspdk_event_sock.so 00:04:32.699 SYMLINK libspdk_event_vhost_blk.so 00:04:32.699 SYMLINK libspdk_event_vmd.so 00:04:32.699 SYMLINK libspdk_event_iobuf.so 00:04:32.699 SYMLINK libspdk_event_scheduler.so 00:04:32.956 CC module/event/subsystems/accel/accel.o 00:04:33.215 LIB libspdk_event_accel.a 00:04:33.215 SO libspdk_event_accel.so.5.0 00:04:33.215 SYMLINK libspdk_event_accel.so 00:04:33.473 CC module/event/subsystems/bdev/bdev.o 00:04:33.731 LIB libspdk_event_bdev.a 00:04:33.731 SO libspdk_event_bdev.so.5.0 00:04:33.731 SYMLINK libspdk_event_bdev.so 00:04:33.989 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:33.989 CC module/event/subsystems/ublk/ublk.o 00:04:33.989 CC module/event/subsystems/nbd/nbd.o 00:04:33.989 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:33.989 CC module/event/subsystems/scsi/scsi.o 00:04:33.989 LIB 
libspdk_event_ublk.a 00:04:33.989 LIB libspdk_event_nbd.a 00:04:33.989 LIB libspdk_event_scsi.a 00:04:33.989 SO libspdk_event_ublk.so.2.0 00:04:33.990 SO libspdk_event_nbd.so.5.0 00:04:33.990 SO libspdk_event_scsi.so.5.0 00:04:34.247 LIB libspdk_event_nvmf.a 00:04:34.247 SYMLINK libspdk_event_nbd.so 00:04:34.247 SYMLINK libspdk_event_ublk.so 00:04:34.247 SYMLINK libspdk_event_scsi.so 00:04:34.247 SO libspdk_event_nvmf.so.5.0 00:04:34.247 SYMLINK libspdk_event_nvmf.so 00:04:34.247 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:34.247 CC module/event/subsystems/iscsi/iscsi.o 00:04:34.505 LIB libspdk_event_vhost_scsi.a 00:04:34.505 LIB libspdk_event_iscsi.a 00:04:34.505 SO libspdk_event_vhost_scsi.so.2.0 00:04:34.505 SO libspdk_event_iscsi.so.5.0 00:04:34.505 SYMLINK libspdk_event_vhost_scsi.so 00:04:34.506 SYMLINK libspdk_event_iscsi.so 00:04:34.764 SO libspdk.so.5.0 00:04:34.764 SYMLINK libspdk.so 00:04:34.764 CXX app/trace/trace.o 00:04:34.764 CC app/trace_record/trace_record.o 00:04:35.022 CC app/iscsi_tgt/iscsi_tgt.o 00:04:35.022 CC examples/ioat/perf/perf.o 00:04:35.022 CC examples/accel/perf/accel_perf.o 00:04:35.022 CC app/nvmf_tgt/nvmf_main.o 00:04:35.022 CC examples/bdev/hello_world/hello_bdev.o 00:04:35.022 CC test/app/bdev_svc/bdev_svc.o 00:04:35.022 CC test/accel/dif/dif.o 00:04:35.022 CC examples/blob/hello_world/hello_blob.o 00:04:35.280 LINK nvmf_tgt 00:04:35.280 LINK spdk_trace_record 00:04:35.280 LINK bdev_svc 00:04:35.280 LINK ioat_perf 00:04:35.280 LINK iscsi_tgt 00:04:35.280 LINK hello_blob 00:04:35.280 LINK hello_bdev 00:04:35.280 LINK spdk_trace 00:04:35.280 LINK dif 00:04:35.538 LINK accel_perf 00:04:35.538 CC examples/ioat/verify/verify.o 00:04:35.538 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:35.538 CC examples/nvme/hello_world/hello_world.o 00:04:35.538 CC examples/sock/hello_world/hello_sock.o 00:04:35.539 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:35.539 CC examples/blob/cli/blobcli.o 00:04:35.539 CC examples/bdev/bdevperf/bdevperf.o 00:04:35.539 CC app/spdk_tgt/spdk_tgt.o 00:04:35.539 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:35.797 LINK verify 00:04:35.797 CC examples/nvme/reconnect/reconnect.o 00:04:35.797 LINK hello_world 00:04:35.797 LINK hello_sock 00:04:35.797 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:35.797 LINK spdk_tgt 00:04:35.797 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:35.797 LINK nvme_fuzz 00:04:36.056 CC examples/nvme/arbitration/arbitration.o 00:04:36.056 LINK reconnect 00:04:36.056 CC examples/vmd/lsvmd/lsvmd.o 00:04:36.056 LINK blobcli 00:04:36.056 CC examples/vmd/led/led.o 00:04:36.056 CC app/spdk_lspci/spdk_lspci.o 00:04:36.315 LINK lsvmd 00:04:36.315 LINK vhost_fuzz 00:04:36.315 CC examples/nvme/hotplug/hotplug.o 00:04:36.315 LINK led 00:04:36.315 LINK spdk_lspci 00:04:36.315 LINK arbitration 00:04:36.315 LINK nvme_manage 00:04:36.315 LINK bdevperf 00:04:36.315 CC app/spdk_nvme_perf/perf.o 00:04:36.315 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:36.315 CC examples/nvme/abort/abort.o 00:04:36.573 LINK hotplug 00:04:36.573 CC app/spdk_nvme_identify/identify.o 00:04:36.573 CC examples/util/zipf/zipf.o 00:04:36.573 CC examples/nvmf/nvmf/nvmf.o 00:04:36.573 LINK cmb_copy 00:04:36.573 CC examples/thread/thread/thread_ex.o 00:04:36.831 CC examples/idxd/perf/perf.o 00:04:36.831 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:36.831 LINK zipf 00:04:36.831 LINK abort 00:04:36.831 CC app/spdk_nvme_discover/discovery_aer.o 00:04:36.831 LINK interrupt_tgt 00:04:36.831 LINK nvmf 00:04:37.089 CC 
test/app/histogram_perf/histogram_perf.o 00:04:37.089 LINK thread 00:04:37.089 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:37.089 LINK idxd_perf 00:04:37.089 LINK spdk_nvme_discover 00:04:37.089 LINK iscsi_fuzz 00:04:37.089 LINK histogram_perf 00:04:37.089 CC test/app/jsoncat/jsoncat.o 00:04:37.089 LINK spdk_nvme_perf 00:04:37.089 LINK pmr_persistence 00:04:37.347 CC app/spdk_top/spdk_top.o 00:04:37.347 LINK spdk_nvme_identify 00:04:37.347 CC test/app/stub/stub.o 00:04:37.347 CC app/spdk_dd/spdk_dd.o 00:04:37.347 CC app/vhost/vhost.o 00:04:37.347 LINK jsoncat 00:04:37.347 TEST_HEADER include/spdk/accel.h 00:04:37.347 TEST_HEADER include/spdk/accel_module.h 00:04:37.347 TEST_HEADER include/spdk/assert.h 00:04:37.347 TEST_HEADER include/spdk/barrier.h 00:04:37.605 TEST_HEADER include/spdk/base64.h 00:04:37.605 TEST_HEADER include/spdk/bdev.h 00:04:37.605 TEST_HEADER include/spdk/bdev_module.h 00:04:37.605 TEST_HEADER include/spdk/bdev_zone.h 00:04:37.605 TEST_HEADER include/spdk/bit_array.h 00:04:37.605 TEST_HEADER include/spdk/bit_pool.h 00:04:37.605 TEST_HEADER include/spdk/blob_bdev.h 00:04:37.605 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:37.605 CC test/bdev/bdevio/bdevio.o 00:04:37.605 TEST_HEADER include/spdk/blobfs.h 00:04:37.605 TEST_HEADER include/spdk/blob.h 00:04:37.605 TEST_HEADER include/spdk/conf.h 00:04:37.605 CC app/fio/nvme/fio_plugin.o 00:04:37.605 TEST_HEADER include/spdk/config.h 00:04:37.605 TEST_HEADER include/spdk/cpuset.h 00:04:37.605 TEST_HEADER include/spdk/crc16.h 00:04:37.605 TEST_HEADER include/spdk/crc32.h 00:04:37.605 TEST_HEADER include/spdk/crc64.h 00:04:37.605 TEST_HEADER include/spdk/dif.h 00:04:37.605 TEST_HEADER include/spdk/dma.h 00:04:37.605 TEST_HEADER include/spdk/endian.h 00:04:37.605 LINK stub 00:04:37.605 TEST_HEADER include/spdk/env_dpdk.h 00:04:37.605 TEST_HEADER include/spdk/env.h 00:04:37.605 TEST_HEADER include/spdk/event.h 00:04:37.605 LINK vhost 00:04:37.605 TEST_HEADER include/spdk/fd_group.h 00:04:37.605 CC test/blobfs/mkfs/mkfs.o 00:04:37.605 TEST_HEADER include/spdk/fd.h 00:04:37.605 TEST_HEADER include/spdk/file.h 00:04:37.605 TEST_HEADER include/spdk/ftl.h 00:04:37.605 TEST_HEADER include/spdk/gpt_spec.h 00:04:37.605 TEST_HEADER include/spdk/hexlify.h 00:04:37.605 TEST_HEADER include/spdk/histogram_data.h 00:04:37.605 TEST_HEADER include/spdk/idxd.h 00:04:37.605 TEST_HEADER include/spdk/idxd_spec.h 00:04:37.605 TEST_HEADER include/spdk/init.h 00:04:37.605 TEST_HEADER include/spdk/ioat.h 00:04:37.605 TEST_HEADER include/spdk/ioat_spec.h 00:04:37.605 TEST_HEADER include/spdk/iscsi_spec.h 00:04:37.605 TEST_HEADER include/spdk/json.h 00:04:37.605 TEST_HEADER include/spdk/jsonrpc.h 00:04:37.605 TEST_HEADER include/spdk/likely.h 00:04:37.605 TEST_HEADER include/spdk/log.h 00:04:37.605 TEST_HEADER include/spdk/lvol.h 00:04:37.605 TEST_HEADER include/spdk/memory.h 00:04:37.605 TEST_HEADER include/spdk/mmio.h 00:04:37.605 TEST_HEADER include/spdk/nbd.h 00:04:37.605 TEST_HEADER include/spdk/notify.h 00:04:37.605 TEST_HEADER include/spdk/nvme.h 00:04:37.605 TEST_HEADER include/spdk/nvme_intel.h 00:04:37.605 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:37.605 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:37.605 TEST_HEADER include/spdk/nvme_spec.h 00:04:37.605 TEST_HEADER include/spdk/nvme_zns.h 00:04:37.605 CC test/dma/test_dma/test_dma.o 00:04:37.605 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:37.605 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:37.605 TEST_HEADER include/spdk/nvmf.h 00:04:37.605 TEST_HEADER 
include/spdk/nvmf_spec.h 00:04:37.605 TEST_HEADER include/spdk/nvmf_transport.h 00:04:37.605 TEST_HEADER include/spdk/opal.h 00:04:37.605 TEST_HEADER include/spdk/opal_spec.h 00:04:37.605 TEST_HEADER include/spdk/pci_ids.h 00:04:37.605 TEST_HEADER include/spdk/pipe.h 00:04:37.605 TEST_HEADER include/spdk/queue.h 00:04:37.605 TEST_HEADER include/spdk/reduce.h 00:04:37.605 TEST_HEADER include/spdk/rpc.h 00:04:37.605 TEST_HEADER include/spdk/scheduler.h 00:04:37.605 TEST_HEADER include/spdk/scsi.h 00:04:37.605 TEST_HEADER include/spdk/scsi_spec.h 00:04:37.605 TEST_HEADER include/spdk/sock.h 00:04:37.605 TEST_HEADER include/spdk/stdinc.h 00:04:37.605 TEST_HEADER include/spdk/string.h 00:04:37.605 TEST_HEADER include/spdk/thread.h 00:04:37.605 TEST_HEADER include/spdk/trace.h 00:04:37.605 TEST_HEADER include/spdk/trace_parser.h 00:04:37.605 TEST_HEADER include/spdk/tree.h 00:04:37.605 TEST_HEADER include/spdk/ublk.h 00:04:37.605 TEST_HEADER include/spdk/util.h 00:04:37.605 TEST_HEADER include/spdk/uuid.h 00:04:37.605 TEST_HEADER include/spdk/version.h 00:04:37.605 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:37.605 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:37.605 TEST_HEADER include/spdk/vhost.h 00:04:37.605 TEST_HEADER include/spdk/vmd.h 00:04:37.605 TEST_HEADER include/spdk/xor.h 00:04:37.605 TEST_HEADER include/spdk/zipf.h 00:04:37.605 CXX test/cpp_headers/accel.o 00:04:37.605 CC test/env/mem_callbacks/mem_callbacks.o 00:04:37.605 CXX test/cpp_headers/accel_module.o 00:04:37.605 LINK spdk_dd 00:04:37.864 LINK mkfs 00:04:37.864 CC app/fio/bdev/fio_plugin.o 00:04:37.864 CXX test/cpp_headers/assert.o 00:04:37.864 LINK mem_callbacks 00:04:37.864 LINK bdevio 00:04:37.864 CC test/env/vtophys/vtophys.o 00:04:37.864 LINK test_dma 00:04:38.122 CC test/event/event_perf/event_perf.o 00:04:38.122 CXX test/cpp_headers/barrier.o 00:04:38.122 LINK spdk_nvme 00:04:38.122 LINK vtophys 00:04:38.122 LINK spdk_top 00:04:38.122 CC test/nvme/aer/aer.o 00:04:38.122 CC test/lvol/esnap/esnap.o 00:04:38.122 CC test/nvme/reset/reset.o 00:04:38.122 LINK event_perf 00:04:38.122 CC test/nvme/sgl/sgl.o 00:04:38.122 CXX test/cpp_headers/base64.o 00:04:38.122 CC test/nvme/e2edp/nvme_dp.o 00:04:38.404 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:38.404 CC test/env/memory/memory_ut.o 00:04:38.404 LINK spdk_bdev 00:04:38.404 CXX test/cpp_headers/bdev.o 00:04:38.404 CC test/event/reactor/reactor.o 00:04:38.404 LINK aer 00:04:38.404 LINK reset 00:04:38.404 LINK env_dpdk_post_init 00:04:38.404 LINK sgl 00:04:38.404 CXX test/cpp_headers/bdev_module.o 00:04:38.404 LINK nvme_dp 00:04:38.703 LINK reactor 00:04:38.703 CXX test/cpp_headers/bdev_zone.o 00:04:38.703 CXX test/cpp_headers/bit_array.o 00:04:38.703 CC test/nvme/overhead/overhead.o 00:04:38.703 CXX test/cpp_headers/bit_pool.o 00:04:38.703 CXX test/cpp_headers/blob_bdev.o 00:04:38.703 CC test/event/reactor_perf/reactor_perf.o 00:04:38.703 CC test/env/pci/pci_ut.o 00:04:38.703 LINK memory_ut 00:04:38.962 CC test/event/app_repeat/app_repeat.o 00:04:38.962 CXX test/cpp_headers/blobfs_bdev.o 00:04:38.962 CC test/rpc_client/rpc_client_test.o 00:04:38.962 CC test/event/scheduler/scheduler.o 00:04:38.962 LINK reactor_perf 00:04:38.962 LINK overhead 00:04:38.962 CXX test/cpp_headers/blobfs.o 00:04:38.962 LINK app_repeat 00:04:38.962 LINK rpc_client_test 00:04:38.962 CXX test/cpp_headers/blob.o 00:04:38.962 CC test/thread/poller_perf/poller_perf.o 00:04:39.222 CXX test/cpp_headers/conf.o 00:04:39.222 CC test/nvme/err_injection/err_injection.o 00:04:39.222 LINK 
scheduler 00:04:39.222 CXX test/cpp_headers/config.o 00:04:39.222 LINK pci_ut 00:04:39.222 CXX test/cpp_headers/cpuset.o 00:04:39.222 LINK poller_perf 00:04:39.222 CC test/nvme/startup/startup.o 00:04:39.222 CC test/nvme/reserve/reserve.o 00:04:39.222 CC test/nvme/simple_copy/simple_copy.o 00:04:39.222 LINK err_injection 00:04:39.482 CC test/nvme/connect_stress/connect_stress.o 00:04:39.482 CXX test/cpp_headers/crc16.o 00:04:39.482 CC test/nvme/boot_partition/boot_partition.o 00:04:39.482 LINK startup 00:04:39.482 CC test/nvme/compliance/nvme_compliance.o 00:04:39.482 LINK reserve 00:04:39.482 CXX test/cpp_headers/crc32.o 00:04:39.482 LINK connect_stress 00:04:39.482 LINK simple_copy 00:04:39.482 LINK boot_partition 00:04:39.745 CC test/nvme/fused_ordering/fused_ordering.o 00:04:39.745 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:39.745 CXX test/cpp_headers/crc64.o 00:04:39.745 CC test/nvme/fdp/fdp.o 00:04:39.745 CXX test/cpp_headers/dif.o 00:04:39.745 CC test/nvme/cuse/cuse.o 00:04:39.745 CXX test/cpp_headers/dma.o 00:04:39.745 CXX test/cpp_headers/endian.o 00:04:39.745 LINK nvme_compliance 00:04:39.745 LINK fused_ordering 00:04:39.745 LINK doorbell_aers 00:04:39.745 CXX test/cpp_headers/env_dpdk.o 00:04:40.003 CXX test/cpp_headers/env.o 00:04:40.003 CXX test/cpp_headers/event.o 00:04:40.003 CXX test/cpp_headers/fd_group.o 00:04:40.003 CXX test/cpp_headers/fd.o 00:04:40.003 LINK fdp 00:04:40.003 CXX test/cpp_headers/file.o 00:04:40.003 CXX test/cpp_headers/ftl.o 00:04:40.003 CXX test/cpp_headers/gpt_spec.o 00:04:40.003 CXX test/cpp_headers/hexlify.o 00:04:40.262 CXX test/cpp_headers/histogram_data.o 00:04:40.262 CXX test/cpp_headers/idxd.o 00:04:40.262 CXX test/cpp_headers/idxd_spec.o 00:04:40.262 CXX test/cpp_headers/init.o 00:04:40.262 CXX test/cpp_headers/ioat.o 00:04:40.262 CXX test/cpp_headers/ioat_spec.o 00:04:40.262 CXX test/cpp_headers/iscsi_spec.o 00:04:40.262 CXX test/cpp_headers/json.o 00:04:40.262 CXX test/cpp_headers/jsonrpc.o 00:04:40.262 CXX test/cpp_headers/likely.o 00:04:40.262 CXX test/cpp_headers/log.o 00:04:40.262 CXX test/cpp_headers/lvol.o 00:04:40.262 CXX test/cpp_headers/memory.o 00:04:40.520 CXX test/cpp_headers/mmio.o 00:04:40.521 CXX test/cpp_headers/nbd.o 00:04:40.521 CXX test/cpp_headers/notify.o 00:04:40.521 CXX test/cpp_headers/nvme.o 00:04:40.521 CXX test/cpp_headers/nvme_intel.o 00:04:40.521 CXX test/cpp_headers/nvme_ocssd.o 00:04:40.521 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:40.521 CXX test/cpp_headers/nvme_spec.o 00:04:40.521 CXX test/cpp_headers/nvme_zns.o 00:04:40.521 CXX test/cpp_headers/nvmf_cmd.o 00:04:40.521 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:40.521 CXX test/cpp_headers/nvmf.o 00:04:40.779 CXX test/cpp_headers/nvmf_spec.o 00:04:40.779 CXX test/cpp_headers/nvmf_transport.o 00:04:40.779 CXX test/cpp_headers/opal.o 00:04:40.779 CXX test/cpp_headers/opal_spec.o 00:04:40.779 CXX test/cpp_headers/pci_ids.o 00:04:40.779 CXX test/cpp_headers/pipe.o 00:04:40.779 CXX test/cpp_headers/queue.o 00:04:40.779 LINK cuse 00:04:40.779 CXX test/cpp_headers/reduce.o 00:04:40.779 CXX test/cpp_headers/rpc.o 00:04:40.779 CXX test/cpp_headers/scheduler.o 00:04:40.779 CXX test/cpp_headers/scsi.o 00:04:40.779 CXX test/cpp_headers/scsi_spec.o 00:04:40.779 CXX test/cpp_headers/sock.o 00:04:40.779 CXX test/cpp_headers/stdinc.o 00:04:41.037 CXX test/cpp_headers/string.o 00:04:41.037 CXX test/cpp_headers/thread.o 00:04:41.037 CXX test/cpp_headers/trace.o 00:04:41.037 CXX test/cpp_headers/trace_parser.o 00:04:41.037 CXX test/cpp_headers/tree.o 00:04:41.037 CXX 
test/cpp_headers/ublk.o 00:04:41.037 CXX test/cpp_headers/util.o 00:04:41.037 CXX test/cpp_headers/uuid.o 00:04:41.037 CXX test/cpp_headers/version.o 00:04:41.037 CXX test/cpp_headers/vfio_user_pci.o 00:04:41.037 CXX test/cpp_headers/vfio_user_spec.o 00:04:41.037 CXX test/cpp_headers/vhost.o 00:04:41.295 CXX test/cpp_headers/vmd.o 00:04:41.295 CXX test/cpp_headers/xor.o 00:04:41.295 CXX test/cpp_headers/zipf.o 00:04:42.671 LINK esnap 00:04:43.237 00:04:43.237 real 0m51.933s 00:04:43.237 user 4m55.206s 00:04:43.237 sys 0m55.959s 00:04:43.237 07:50:48 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:04:43.237 07:50:48 -- common/autotest_common.sh@10 -- $ set +x 00:04:43.237 ************************************ 00:04:43.237 END TEST make 00:04:43.237 ************************************ 00:04:43.496 07:50:49 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:43.496 07:50:49 -- nvmf/common.sh@7 -- # uname -s 00:04:43.496 07:50:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:43.496 07:50:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:43.496 07:50:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:43.496 07:50:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:43.496 07:50:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:43.496 07:50:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:43.496 07:50:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:43.496 07:50:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:43.496 07:50:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:43.496 07:50:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:43.496 07:50:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:04:43.496 07:50:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:04:43.496 07:50:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:43.496 07:50:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:43.496 07:50:49 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:43.496 07:50:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:43.496 07:50:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:43.496 07:50:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:43.496 07:50:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:43.496 07:50:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.496 07:50:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.496 07:50:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.496 07:50:49 -- paths/export.sh@5 -- # export PATH 00:04:43.496 07:50:49 -- 
paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:43.496 07:50:49 -- nvmf/common.sh@46 -- # : 0 00:04:43.496 07:50:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:04:43.496 07:50:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:04:43.496 07:50:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:04:43.496 07:50:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:43.496 07:50:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:43.496 07:50:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:04:43.496 07:50:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:04:43.496 07:50:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:04:43.496 07:50:49 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:43.496 07:50:49 -- spdk/autotest.sh@32 -- # uname -s 00:04:43.496 07:50:49 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:43.496 07:50:49 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:43.496 07:50:49 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:43.496 07:50:49 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:43.496 07:50:49 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:43.496 07:50:49 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:43.496 07:50:49 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:43.496 07:50:49 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:43.496 07:50:49 -- spdk/autotest.sh@48 -- # udevadm_pid=59671 00:04:43.496 07:50:49 -- spdk/autotest.sh@51 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:04:43.496 07:50:49 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:43.496 07:50:49 -- spdk/autotest.sh@54 -- # echo 59678 00:04:43.496 07:50:49 -- spdk/autotest.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:43.496 07:50:49 -- spdk/autotest.sh@56 -- # echo 59682 00:04:43.496 07:50:49 -- spdk/autotest.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power 00:04:43.496 07:50:49 -- spdk/autotest.sh@58 -- # [[ QEMU != QEMU ]] 00:04:43.496 07:50:49 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:43.496 07:50:49 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:04:43.496 07:50:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:04:43.496 07:50:49 -- common/autotest_common.sh@10 -- # set +x 00:04:43.496 07:50:49 -- spdk/autotest.sh@70 -- # create_test_list 00:04:43.496 07:50:49 -- common/autotest_common.sh@736 -- # xtrace_disable 00:04:43.496 07:50:49 -- common/autotest_common.sh@10 -- # set +x 00:04:43.496 07:50:49 -- spdk/autotest.sh@72 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:43.496 07:50:49 -- spdk/autotest.sh@72 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:43.496 07:50:49 -- spdk/autotest.sh@72 -- # src=/home/vagrant/spdk_repo/spdk 00:04:43.496 07:50:49 -- spdk/autotest.sh@73 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:43.496 07:50:49 -- spdk/autotest.sh@74 -- # cd /home/vagrant/spdk_repo/spdk 00:04:43.496 07:50:49 -- 
spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:04:43.496 07:50:49 -- common/autotest_common.sh@1440 -- # uname 00:04:43.496 07:50:49 -- common/autotest_common.sh@1440 -- # '[' Linux = FreeBSD ']' 00:04:43.496 07:50:49 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:04:43.496 07:50:49 -- common/autotest_common.sh@1460 -- # uname 00:04:43.496 07:50:49 -- common/autotest_common.sh@1460 -- # [[ Linux = FreeBSD ]] 00:04:43.496 07:50:49 -- spdk/autotest.sh@82 -- # grep CC_TYPE mk/cc.mk 00:04:43.496 07:50:49 -- spdk/autotest.sh@82 -- # CC_TYPE=CC_TYPE=gcc 00:04:43.496 07:50:49 -- spdk/autotest.sh@83 -- # hash lcov 00:04:43.496 07:50:49 -- spdk/autotest.sh@83 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:04:43.496 07:50:49 -- spdk/autotest.sh@91 -- # export 'LCOV_OPTS= 00:04:43.496 --rc lcov_branch_coverage=1 00:04:43.496 --rc lcov_function_coverage=1 00:04:43.496 --rc genhtml_branch_coverage=1 00:04:43.496 --rc genhtml_function_coverage=1 00:04:43.496 --rc genhtml_legend=1 00:04:43.496 --rc geninfo_all_blocks=1 00:04:43.496 ' 00:04:43.496 07:50:49 -- spdk/autotest.sh@91 -- # LCOV_OPTS=' 00:04:43.496 --rc lcov_branch_coverage=1 00:04:43.496 --rc lcov_function_coverage=1 00:04:43.496 --rc genhtml_branch_coverage=1 00:04:43.496 --rc genhtml_function_coverage=1 00:04:43.496 --rc genhtml_legend=1 00:04:43.496 --rc geninfo_all_blocks=1 00:04:43.496 ' 00:04:43.496 07:50:49 -- spdk/autotest.sh@92 -- # export 'LCOV=lcov 00:04:43.496 --rc lcov_branch_coverage=1 00:04:43.496 --rc lcov_function_coverage=1 00:04:43.496 --rc genhtml_branch_coverage=1 00:04:43.496 --rc genhtml_function_coverage=1 00:04:43.496 --rc genhtml_legend=1 00:04:43.496 --rc geninfo_all_blocks=1 00:04:43.496 --no-external' 00:04:43.496 07:50:49 -- spdk/autotest.sh@92 -- # LCOV='lcov 00:04:43.496 --rc lcov_branch_coverage=1 00:04:43.496 --rc lcov_function_coverage=1 00:04:43.496 --rc genhtml_branch_coverage=1 00:04:43.496 --rc genhtml_function_coverage=1 00:04:43.496 --rc genhtml_legend=1 00:04:43.496 --rc geninfo_all_blocks=1 00:04:43.496 --no-external' 00:04:43.496 07:50:49 -- spdk/autotest.sh@94 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:04:43.754 lcov: LCOV version 1.14 00:04:43.754 07:50:49 -- spdk/autotest.sh@96 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:51.866 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:04:51.866 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:04:51.866 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:04:51.866 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:04:51.866 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:04:51.866 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:05:06.768 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:05:06.768 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:05:06.768 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:05:06.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:05:06.769 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:05:06.769 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:05:07.027 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 
00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:05:07.028 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:05:07.028 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:05:07.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:05:07.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:05:07.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:05:07.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:05:07.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:05:07.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:05:07.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:05:07.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:05:07.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:05:07.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:05:07.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:05:07.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:05:07.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:05:07.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:05:07.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:05:07.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:05:07.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:05:07.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:05:07.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:05:07.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:05:07.286 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:05:07.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:05:07.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:05:07.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:05:07.286 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:05:07.286 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:05:10.571 07:51:16 -- spdk/autotest.sh@100 -- # timing_enter pre_cleanup 00:05:10.571 07:51:16 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:10.571 07:51:16 -- common/autotest_common.sh@10 -- # set +x 00:05:10.571 07:51:16 -- spdk/autotest.sh@102 -- # rm -f 00:05:10.571 07:51:16 -- spdk/autotest.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:10.830 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:11.090 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:05:11.090 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:05:11.090 07:51:16 -- spdk/autotest.sh@107 -- # get_zoned_devs 00:05:11.090 07:51:16 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:11.090 07:51:16 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:11.090 07:51:16 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:11.090 07:51:16 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:11.090 07:51:16 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:11.090 07:51:16 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:11.090 07:51:16 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:11.090 07:51:16 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:11.090 07:51:16 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:11.090 07:51:16 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:11.090 07:51:16 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:11.090 07:51:16 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:11.090 07:51:16 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:11.090 07:51:16 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:11.090 07:51:16 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:11.090 07:51:16 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:11.090 07:51:16 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:11.090 07:51:16 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:11.090 07:51:16 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:11.090 07:51:16 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:11.090 07:51:16 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:11.090 07:51:16 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:11.090 07:51:16 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:11.090 07:51:16 -- spdk/autotest.sh@109 -- # (( 0 > 0 )) 00:05:11.090 07:51:16 -- spdk/autotest.sh@121 -- # ls /dev/nvme0n1 /dev/nvme1n1 /dev/nvme1n2 /dev/nvme1n3 00:05:11.090 07:51:16 -- spdk/autotest.sh@121 -- # grep -v p 00:05:11.090 07:51:16 -- 
spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:11.090 07:51:16 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:11.090 07:51:16 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme0n1 00:05:11.090 07:51:16 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:05:11.090 07:51:16 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:11.090 No valid GPT data, bailing 00:05:11.090 07:51:16 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:11.090 07:51:16 -- scripts/common.sh@393 -- # pt= 00:05:11.090 07:51:16 -- scripts/common.sh@394 -- # return 1 00:05:11.090 07:51:16 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:11.090 1+0 records in 00:05:11.090 1+0 records out 00:05:11.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00473285 s, 222 MB/s 00:05:11.090 07:51:16 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:11.090 07:51:16 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:11.090 07:51:16 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n1 00:05:11.090 07:51:16 -- scripts/common.sh@380 -- # local block=/dev/nvme1n1 pt 00:05:11.090 07:51:16 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:11.090 No valid GPT data, bailing 00:05:11.090 07:51:16 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:11.090 07:51:16 -- scripts/common.sh@393 -- # pt= 00:05:11.090 07:51:16 -- scripts/common.sh@394 -- # return 1 00:05:11.090 07:51:16 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:11.090 1+0 records in 00:05:11.090 1+0 records out 00:05:11.090 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00426509 s, 246 MB/s 00:05:11.090 07:51:16 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:11.090 07:51:16 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:11.090 07:51:16 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n2 00:05:11.090 07:51:16 -- scripts/common.sh@380 -- # local block=/dev/nvme1n2 pt 00:05:11.090 07:51:16 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:11.349 No valid GPT data, bailing 00:05:11.349 07:51:16 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:11.349 07:51:16 -- scripts/common.sh@393 -- # pt= 00:05:11.349 07:51:16 -- scripts/common.sh@394 -- # return 1 00:05:11.349 07:51:16 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:11.349 1+0 records in 00:05:11.349 1+0 records out 00:05:11.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00368314 s, 285 MB/s 00:05:11.349 07:51:16 -- spdk/autotest.sh@121 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:05:11.349 07:51:16 -- spdk/autotest.sh@123 -- # [[ -z '' ]] 00:05:11.349 07:51:16 -- spdk/autotest.sh@124 -- # block_in_use /dev/nvme1n3 00:05:11.349 07:51:16 -- scripts/common.sh@380 -- # local block=/dev/nvme1n3 pt 00:05:11.349 07:51:16 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:11.349 No valid GPT data, bailing 00:05:11.349 07:51:17 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:11.349 07:51:17 -- scripts/common.sh@393 -- # pt= 00:05:11.349 07:51:17 -- scripts/common.sh@394 -- # return 1 00:05:11.349 07:51:17 -- spdk/autotest.sh@125 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:11.349 1+0 records in 00:05:11.349 1+0 records out 
00:05:11.349 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00365333 s, 287 MB/s 00:05:11.349 07:51:17 -- spdk/autotest.sh@129 -- # sync 00:05:11.349 07:51:17 -- spdk/autotest.sh@131 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:11.349 07:51:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:11.349 07:51:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:13.250 07:51:18 -- spdk/autotest.sh@135 -- # uname -s 00:05:13.250 07:51:18 -- spdk/autotest.sh@135 -- # '[' Linux = Linux ']' 00:05:13.250 07:51:18 -- spdk/autotest.sh@136 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:13.250 07:51:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.250 07:51:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.250 07:51:18 -- common/autotest_common.sh@10 -- # set +x 00:05:13.250 ************************************ 00:05:13.250 START TEST setup.sh 00:05:13.250 ************************************ 00:05:13.250 07:51:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:05:13.250 * Looking for test storage... 00:05:13.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:13.250 07:51:18 -- setup/test-setup.sh@10 -- # uname -s 00:05:13.250 07:51:18 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:05:13.250 07:51:18 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:13.250 07:51:18 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:13.250 07:51:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:13.250 07:51:18 -- common/autotest_common.sh@10 -- # set +x 00:05:13.250 ************************************ 00:05:13.250 START TEST acl 00:05:13.250 ************************************ 00:05:13.250 07:51:18 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:05:13.250 * Looking for test storage... 
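The pre-cleanup pass traced above (spdk/autotest.sh@121-125 together with block_in_use from scripts/common.sh) walks every NVMe namespace, asks scripts/spdk-gpt.py and blkid whether it carries a partition table, and zeroes the first MiB when it does not. A minimal standalone sketch of that loop, using the same device glob as the trace but leaving out the spdk-gpt.py probe:

    for dev in $(ls /dev/nvme*n* | grep -v p || true); do
        # blkid prints the partition-table type (gpt, dos, ...) or nothing at all
        pt=$(blkid -s PTTYPE -o value "$dev" 2>/dev/null || true)
        if [[ -z "$pt" ]]; then
            # no partition table found, so the namespace is treated as free and wiped
            dd if=/dev/zero of="$dev" bs=1M count=1
        fi
    done

On this runner all four namespaces (nvme0n1 and nvme1n1 through nvme1n3) came back empty, which is why the trace shows one "No valid GPT data, bailing" followed by one dd per device.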
00:05:13.509 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:13.509 07:51:19 -- setup/acl.sh@10 -- # get_zoned_devs 00:05:13.509 07:51:19 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:13.509 07:51:19 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:13.509 07:51:19 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:13.509 07:51:19 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:13.509 07:51:19 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:13.509 07:51:19 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:13.509 07:51:19 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:13.509 07:51:19 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:13.509 07:51:19 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:13.509 07:51:19 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:13.509 07:51:19 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:13.509 07:51:19 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:13.509 07:51:19 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:13.509 07:51:19 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:13.509 07:51:19 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:13.509 07:51:19 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:13.509 07:51:19 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:13.509 07:51:19 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:13.509 07:51:19 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:13.509 07:51:19 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:13.509 07:51:19 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:13.509 07:51:19 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:13.509 07:51:19 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:13.509 07:51:19 -- setup/acl.sh@12 -- # devs=() 00:05:13.509 07:51:19 -- setup/acl.sh@12 -- # declare -a devs 00:05:13.509 07:51:19 -- setup/acl.sh@13 -- # drivers=() 00:05:13.509 07:51:19 -- setup/acl.sh@13 -- # declare -A drivers 00:05:13.509 07:51:19 -- setup/acl.sh@51 -- # setup reset 00:05:13.509 07:51:19 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:13.509 07:51:19 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:14.077 07:51:19 -- setup/acl.sh@52 -- # collect_setup_devs 00:05:14.077 07:51:19 -- setup/acl.sh@16 -- # local dev driver 00:05:14.077 07:51:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.077 07:51:19 -- setup/acl.sh@15 -- # setup output status 00:05:14.077 07:51:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.077 07:51:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:14.335 Hugepages 00:05:14.335 node hugesize free / total 00:05:14.335 07:51:19 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:05:14.335 07:51:19 -- setup/acl.sh@19 -- # continue 00:05:14.335 07:51:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.335 00:05:14.335 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:14.335 07:51:19 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:05:14.335 07:51:19 -- setup/acl.sh@19 -- # continue 00:05:14.335 07:51:19 -- setup/acl.sh@18 -- # read -r 
_ dev _ _ _ driver _ 00:05:14.335 07:51:19 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:05:14.335 07:51:19 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:05:14.335 07:51:19 -- setup/acl.sh@20 -- # continue 00:05:14.335 07:51:19 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.335 07:51:20 -- setup/acl.sh@19 -- # [[ 0000:00:06.0 == *:*:*.* ]] 00:05:14.335 07:51:20 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:14.335 07:51:20 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:14.335 07:51:20 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:14.335 07:51:20 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:14.335 07:51:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.335 07:51:20 -- setup/acl.sh@19 -- # [[ 0000:00:07.0 == *:*:*.* ]] 00:05:14.335 07:51:20 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:05:14.335 07:51:20 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:14.335 07:51:20 -- setup/acl.sh@22 -- # devs+=("$dev") 00:05:14.335 07:51:20 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:05:14.335 07:51:20 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:05:14.335 07:51:20 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:05:14.335 07:51:20 -- setup/acl.sh@54 -- # run_test denied denied 00:05:14.335 07:51:20 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:14.335 07:51:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:14.335 07:51:20 -- common/autotest_common.sh@10 -- # set +x 00:05:14.335 ************************************ 00:05:14.335 START TEST denied 00:05:14.335 ************************************ 00:05:14.335 07:51:20 -- common/autotest_common.sh@1104 -- # denied 00:05:14.335 07:51:20 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:06.0' 00:05:14.335 07:51:20 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:06.0' 00:05:14.336 07:51:20 -- setup/acl.sh@38 -- # setup output config 00:05:14.336 07:51:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:14.336 07:51:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:15.271 0000:00:06.0 (1b36 0010): Skipping denied controller at 0000:00:06.0 00:05:15.271 07:51:20 -- setup/acl.sh@40 -- # verify 0000:00:06.0 00:05:15.271 07:51:20 -- setup/acl.sh@28 -- # local dev driver 00:05:15.271 07:51:20 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:15.271 07:51:20 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:06.0 ]] 00:05:15.271 07:51:20 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:06.0/driver 00:05:15.271 07:51:20 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:15.271 07:51:20 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:15.271 07:51:20 -- setup/acl.sh@41 -- # setup reset 00:05:15.271 07:51:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:15.271 07:51:20 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:15.838 00:05:15.838 real 0m1.406s 00:05:15.838 user 0m0.564s 00:05:15.838 sys 0m0.799s 00:05:15.838 07:51:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.838 07:51:21 -- common/autotest_common.sh@10 -- # set +x 00:05:15.838 ************************************ 00:05:15.838 END TEST denied 00:05:15.838 ************************************ 00:05:15.838 07:51:21 -- setup/acl.sh@55 -- # run_test allowed allowed 00:05:15.838 07:51:21 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:15.838 07:51:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:15.838 
07:51:21 -- common/autotest_common.sh@10 -- # set +x 00:05:15.838 ************************************ 00:05:15.838 START TEST allowed 00:05:15.838 ************************************ 00:05:15.838 07:51:21 -- common/autotest_common.sh@1104 -- # allowed 00:05:15.838 07:51:21 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:06.0 00:05:15.838 07:51:21 -- setup/acl.sh@45 -- # setup output config 00:05:15.838 07:51:21 -- setup/acl.sh@46 -- # grep -E '0000:00:06.0 .*: nvme -> .*' 00:05:15.838 07:51:21 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.838 07:51:21 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:16.773 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:16.773 07:51:22 -- setup/acl.sh@47 -- # verify 0000:00:07.0 00:05:16.773 07:51:22 -- setup/acl.sh@28 -- # local dev driver 00:05:16.773 07:51:22 -- setup/acl.sh@30 -- # for dev in "$@" 00:05:16.773 07:51:22 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:07.0 ]] 00:05:16.773 07:51:22 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:07.0/driver 00:05:16.773 07:51:22 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:05:16.773 07:51:22 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:05:16.773 07:51:22 -- setup/acl.sh@48 -- # setup reset 00:05:16.773 07:51:22 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:16.773 07:51:22 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:17.340 00:05:17.340 real 0m1.510s 00:05:17.340 user 0m0.673s 00:05:17.340 sys 0m0.829s 00:05:17.340 07:51:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.340 07:51:23 -- common/autotest_common.sh@10 -- # set +x 00:05:17.340 ************************************ 00:05:17.340 END TEST allowed 00:05:17.340 ************************************ 00:05:17.340 00:05:17.340 real 0m4.146s 00:05:17.340 user 0m1.799s 00:05:17.340 sys 0m2.320s 00:05:17.340 07:51:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:17.340 07:51:23 -- common/autotest_common.sh@10 -- # set +x 00:05:17.340 ************************************ 00:05:17.340 END TEST acl 00:05:17.340 ************************************ 00:05:17.599 07:51:23 -- setup/test-setup.sh@13 -- # run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:17.599 07:51:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.599 07:51:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.599 07:51:23 -- common/autotest_common.sh@10 -- # set +x 00:05:17.599 ************************************ 00:05:17.599 START TEST hugepages 00:05:17.599 ************************************ 00:05:17.599 07:51:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:05:17.599 * Looking for test storage... 
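Both the pre-cleanup above and acl.sh begin by calling get_zoned_devs, whose trace (common/autotest_common.sh@1654-1658) walks /sys/block/nvme* and classifies a namespace as zoned when queue/zoned reads anything other than "none". A rough standalone equivalent, assuming the same sysfs layout and leaving out the associative-array and PCI-address details the real helper also tracks:

    zoned_devs=()
    for nvme in /sys/block/nvme*; do
        dev=${nvme##*/}
        # conventional namespaces report "none"; ZNS namespaces report e.g. "host-managed"
        if [[ -e "$nvme/queue/zoned" && $(<"$nvme/queue/zoned") != none ]]; then
            zoned_devs+=("$dev")
        fi
    done

Every namespace on this runner reports "none", so each traced [[ none != none ]] test falls through and the list stays empty.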
00:05:17.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:17.599 07:51:23 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:05:17.599 07:51:23 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:05:17.599 07:51:23 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:05:17.599 07:51:23 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:05:17.599 07:51:23 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:05:17.599 07:51:23 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:05:17.599 07:51:23 -- setup/common.sh@17 -- # local get=Hugepagesize 00:05:17.599 07:51:23 -- setup/common.sh@18 -- # local node= 00:05:17.599 07:51:23 -- setup/common.sh@19 -- # local var val 00:05:17.599 07:51:23 -- setup/common.sh@20 -- # local mem_f mem 00:05:17.599 07:51:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:17.599 07:51:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:17.599 07:51:23 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:17.599 07:51:23 -- setup/common.sh@28 -- # mapfile -t mem 00:05:17.599 07:51:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 4893064 kB' 'MemAvailable: 7373688 kB' 'Buffers: 2436 kB' 'Cached: 2686168 kB' 'SwapCached: 0 kB' 'Active: 434132 kB' 'Inactive: 2357212 kB' 'Active(anon): 113232 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357212 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 104308 kB' 'Mapped: 48856 kB' 'Shmem: 10492 kB' 'KReclaimable: 79348 kB' 'Slab: 158292 kB' 'SReclaimable: 79348 kB' 'SUnreclaim: 78944 kB' 'KernelStack: 6572 kB' 'PageTables: 4476 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412436 kB' 'Committed_AS: 334592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- 
setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.599 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.599 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # continue 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # IFS=': ' 00:05:17.600 07:51:23 -- setup/common.sh@31 -- # read -r var val _ 00:05:17.600 07:51:23 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:05:17.600 07:51:23 -- setup/common.sh@33 -- # echo 2048 00:05:17.600 07:51:23 -- setup/common.sh@33 -- # return 0 00:05:17.600 07:51:23 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:05:17.600 07:51:23 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:05:17.600 07:51:23 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:05:17.600 07:51:23 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:05:17.600 07:51:23 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:05:17.600 07:51:23 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 
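The hugepages suite opens by asking get_meminfo for Hugepagesize; the long read loop traced above simply scans /proc/meminfo key by key until it reaches Hugepagesize and echoes the value (2048 here). A functionally equivalent one-liner, for reference:

    awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo    # prints 2048 (kB) on this runner

That value becomes default_hugepages=2048 and seeds the per-node counts that default_setup verifies next.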
00:05:17.600 07:51:23 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:05:17.600 07:51:23 -- setup/hugepages.sh@207 -- # get_nodes 00:05:17.600 07:51:23 -- setup/hugepages.sh@27 -- # local node 00:05:17.600 07:51:23 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:17.600 07:51:23 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:05:17.600 07:51:23 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:17.600 07:51:23 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:17.600 07:51:23 -- setup/hugepages.sh@208 -- # clear_hp 00:05:17.600 07:51:23 -- setup/hugepages.sh@37 -- # local node hp 00:05:17.600 07:51:23 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:17.600 07:51:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:17.600 07:51:23 -- setup/hugepages.sh@41 -- # echo 0 00:05:17.600 07:51:23 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:17.600 07:51:23 -- setup/hugepages.sh@41 -- # echo 0 00:05:17.600 07:51:23 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:17.600 07:51:23 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:17.600 07:51:23 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:05:17.600 07:51:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:17.600 07:51:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:17.600 07:51:23 -- common/autotest_common.sh@10 -- # set +x 00:05:17.600 ************************************ 00:05:17.600 START TEST default_setup 00:05:17.600 ************************************ 00:05:17.600 07:51:23 -- common/autotest_common.sh@1104 -- # default_setup 00:05:17.600 07:51:23 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:05:17.600 07:51:23 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:17.600 07:51:23 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:17.600 07:51:23 -- setup/hugepages.sh@51 -- # shift 00:05:17.600 07:51:23 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:17.600 07:51:23 -- setup/hugepages.sh@52 -- # local node_ids 00:05:17.600 07:51:23 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:17.600 07:51:23 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:17.600 07:51:23 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:17.600 07:51:23 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:17.600 07:51:23 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:17.600 07:51:23 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:17.600 07:51:23 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:17.600 07:51:23 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:17.600 07:51:23 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:17.600 07:51:23 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:17.600 07:51:23 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:17.601 07:51:23 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:17.601 07:51:23 -- setup/hugepages.sh@73 -- # return 0 00:05:17.601 07:51:23 -- setup/hugepages.sh@137 -- # setup output 00:05:17.601 07:51:23 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:17.601 07:51:23 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:18.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.428 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.428 
0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:18.428 07:51:24 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:05:18.428 07:51:24 -- setup/hugepages.sh@89 -- # local node 00:05:18.428 07:51:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:18.428 07:51:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:18.428 07:51:24 -- setup/hugepages.sh@92 -- # local surp 00:05:18.428 07:51:24 -- setup/hugepages.sh@93 -- # local resv 00:05:18.428 07:51:24 -- setup/hugepages.sh@94 -- # local anon 00:05:18.428 07:51:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:18.428 07:51:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:18.428 07:51:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:18.428 07:51:24 -- setup/common.sh@18 -- # local node= 00:05:18.428 07:51:24 -- setup/common.sh@19 -- # local var val 00:05:18.428 07:51:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.428 07:51:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.428 07:51:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.428 07:51:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.428 07:51:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.428 07:51:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 07:51:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6983432 kB' 'MemAvailable: 9463896 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450680 kB' 'Inactive: 2357224 kB' 'Active(anon): 129780 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357224 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120252 kB' 'Mapped: 49020 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157852 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78848 kB' 'KernelStack: 6512 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 07:51:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.428 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.428 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 
07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 
-- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:18.429 07:51:24 -- setup/common.sh@33 -- # echo 0 00:05:18.429 07:51:24 -- setup/common.sh@33 -- # return 0 00:05:18.429 07:51:24 -- setup/hugepages.sh@97 -- # anon=0 00:05:18.429 07:51:24 -- 
setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:18.429 07:51:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.429 07:51:24 -- setup/common.sh@18 -- # local node= 00:05:18.429 07:51:24 -- setup/common.sh@19 -- # local var val 00:05:18.429 07:51:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.429 07:51:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.429 07:51:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.429 07:51:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.429 07:51:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.429 07:51:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6983236 kB' 'MemAvailable: 9463704 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450020 kB' 'Inactive: 2357228 kB' 'Active(anon): 129120 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120220 kB' 'Mapped: 49080 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157852 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78848 kB' 'KernelStack: 6496 kB' 'PageTables: 4336 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.429 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.429 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 
00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- 
setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 
00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.430 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.430 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.431 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.431 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.431 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.431 07:51:24 -- setup/common.sh@33 -- # echo 0 00:05:18.431 07:51:24 -- setup/common.sh@33 -- # return 0 00:05:18.431 07:51:24 -- setup/hugepages.sh@99 -- # surp=0 00:05:18.431 07:51:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:18.431 07:51:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:18.431 07:51:24 -- setup/common.sh@18 -- # local node= 00:05:18.431 07:51:24 -- setup/common.sh@19 -- # local var val 00:05:18.431 07:51:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.431 07:51:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.431 07:51:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.431 07:51:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.431 07:51:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.431 07:51:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.431 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.431 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.431 07:51:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6983236 kB' 'MemAvailable: 9463704 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 
'SwapCached: 0 kB' 'Active: 450224 kB' 'Inactive: 2357228 kB' 'Active(anon): 129324 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120424 kB' 'Mapped: 49080 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157852 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78848 kB' 'KernelStack: 6464 kB' 'PageTables: 4236 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d 
]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.691 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.691 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 
00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:18.692 07:51:24 -- setup/common.sh@33 -- # echo 0 00:05:18.692 07:51:24 -- setup/common.sh@33 -- # return 0 00:05:18.692 07:51:24 -- setup/hugepages.sh@100 -- # resv=0 00:05:18.692 07:51:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:18.692 nr_hugepages=1024 00:05:18.692 resv_hugepages=0 00:05:18.692 07:51:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:18.692 surplus_hugepages=0 00:05:18.692 07:51:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:18.692 anon_hugepages=0 00:05:18.692 07:51:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:18.692 07:51:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.692 07:51:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:18.692 07:51:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:18.692 07:51:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:18.692 07:51:24 -- setup/common.sh@18 -- # local node= 00:05:18.692 07:51:24 -- setup/common.sh@19 -- # local var val 00:05:18.692 07:51:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.692 07:51:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.692 07:51:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:18.692 07:51:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:18.692 07:51:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.692 07:51:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6983236 kB' 'MemAvailable: 9463704 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450028 kB' 'Inactive: 2357228 kB' 'Active(anon): 129128 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120256 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157856 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78852 kB' 'KernelStack: 6496 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.692 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.692 07:51:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 
00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- 
setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:18.693 07:51:24 -- setup/common.sh@33 -- # echo 1024 
00:05:18.693 07:51:24 -- setup/common.sh@33 -- # return 0 00:05:18.693 07:51:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:18.693 07:51:24 -- setup/hugepages.sh@112 -- # get_nodes 00:05:18.693 07:51:24 -- setup/hugepages.sh@27 -- # local node 00:05:18.693 07:51:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:18.693 07:51:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:18.693 07:51:24 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:18.693 07:51:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:18.693 07:51:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:18.693 07:51:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:18.693 07:51:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:18.693 07:51:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:18.693 07:51:24 -- setup/common.sh@18 -- # local node=0 00:05:18.693 07:51:24 -- setup/common.sh@19 -- # local var val 00:05:18.693 07:51:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:18.693 07:51:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:18.693 07:51:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:18.693 07:51:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:18.693 07:51:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:18.693 07:51:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6982732 kB' 'MemUsed: 5259240 kB' 'SwapCached: 0 kB' 'Active: 450252 kB' 'Inactive: 2357228 kB' 'Active(anon): 129352 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2688596 kB' 'Mapped: 48888 kB' 'AnonPages: 120476 kB' 'Shmem: 10468 kB' 'KernelStack: 6496 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79004 kB' 'Slab: 157848 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78844 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.693 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.693 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 
07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 
07:51:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # continue 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:18.694 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:18.694 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:18.694 07:51:24 -- setup/common.sh@33 -- # echo 0 00:05:18.694 07:51:24 -- setup/common.sh@33 -- # return 0 00:05:18.694 07:51:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:18.694 07:51:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:18.694 07:51:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:18.694 07:51:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:18.694 node0=1024 expecting 1024 00:05:18.694 07:51:24 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:18.694 07:51:24 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:18.694 00:05:18.694 real 0m1.022s 00:05:18.694 user 0m0.487s 00:05:18.694 sys 0m0.496s 00:05:18.694 07:51:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.694 07:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:18.694 ************************************ 00:05:18.694 END TEST default_setup 00:05:18.694 ************************************ 00:05:18.694 07:51:24 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:05:18.694 07:51:24 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:18.694 07:51:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:18.694 07:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:18.694 ************************************ 00:05:18.694 START TEST per_node_1G_alloc 00:05:18.694 ************************************ 
00:05:18.694 07:51:24 -- common/autotest_common.sh@1104 -- # per_node_1G_alloc 00:05:18.694 07:51:24 -- setup/hugepages.sh@143 -- # local IFS=, 00:05:18.694 07:51:24 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:05:18.694 07:51:24 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:18.694 07:51:24 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:18.694 07:51:24 -- setup/hugepages.sh@51 -- # shift 00:05:18.694 07:51:24 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:18.694 07:51:24 -- setup/hugepages.sh@52 -- # local node_ids 00:05:18.694 07:51:24 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:18.694 07:51:24 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:05:18.694 07:51:24 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:18.694 07:51:24 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:18.694 07:51:24 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:18.694 07:51:24 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:18.694 07:51:24 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:18.694 07:51:24 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:18.694 07:51:24 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:18.694 07:51:24 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:18.694 07:51:24 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:18.694 07:51:24 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:05:18.694 07:51:24 -- setup/hugepages.sh@73 -- # return 0 00:05:18.694 07:51:24 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:05:18.695 07:51:24 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:05:18.695 07:51:24 -- setup/hugepages.sh@146 -- # setup output 00:05:18.695 07:51:24 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:18.695 07:51:24 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:18.953 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:18.953 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:18.953 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.215 07:51:24 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:05:19.215 07:51:24 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:05:19.215 07:51:24 -- setup/hugepages.sh@89 -- # local node 00:05:19.215 07:51:24 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.215 07:51:24 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.215 07:51:24 -- setup/hugepages.sh@92 -- # local surp 00:05:19.215 07:51:24 -- setup/hugepages.sh@93 -- # local resv 00:05:19.215 07:51:24 -- setup/hugepages.sh@94 -- # local anon 00:05:19.215 07:51:24 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.215 07:51:24 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.215 07:51:24 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.215 07:51:24 -- setup/common.sh@18 -- # local node= 00:05:19.215 07:51:24 -- setup/common.sh@19 -- # local var val 00:05:19.215 07:51:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.215 07:51:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.215 07:51:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.215 07:51:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.215 07:51:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.215 07:51:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 
00:05:19.215 07:51:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8029340 kB' 'MemAvailable: 10509808 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450832 kB' 'Inactive: 2357228 kB' 'Active(anon): 129932 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 121140 kB' 'Mapped: 49064 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157800 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78796 kB' 'KernelStack: 6616 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 352820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 
07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.215 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.215 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.216 07:51:24 -- setup/common.sh@33 -- # echo 0 00:05:19.216 07:51:24 -- setup/common.sh@33 -- # return 0 00:05:19.216 07:51:24 -- setup/hugepages.sh@97 -- # anon=0 00:05:19.216 07:51:24 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.216 07:51:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.216 07:51:24 -- setup/common.sh@18 -- # local node= 00:05:19.216 07:51:24 -- setup/common.sh@19 -- # local var val 00:05:19.216 07:51:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.216 07:51:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.216 07:51:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.216 07:51:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.216 07:51:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.216 07:51:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8029340 kB' 'MemAvailable: 10509808 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450136 kB' 'Inactive: 2357228 kB' 'Active(anon): 129236 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120404 kB' 'Mapped: 48824 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 
kB' 'Slab: 157804 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78800 kB' 'KernelStack: 6560 kB' 'PageTables: 4512 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.216 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.216 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 
07:51:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 
00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.217 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.217 07:51:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.218 07:51:24 -- setup/common.sh@33 -- # echo 0 00:05:19.218 07:51:24 -- setup/common.sh@33 -- # return 0 00:05:19.218 07:51:24 -- setup/hugepages.sh@99 -- # surp=0 00:05:19.218 07:51:24 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:19.218 07:51:24 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:19.218 07:51:24 -- setup/common.sh@18 -- # local node= 00:05:19.218 07:51:24 -- setup/common.sh@19 -- # local var val 00:05:19.218 07:51:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.218 07:51:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.218 07:51:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.218 07:51:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.218 07:51:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.218 07:51:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8029340 kB' 'MemAvailable: 10509808 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450088 kB' 'Inactive: 2357228 kB' 'Active(anon): 129188 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120392 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157796 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78792 kB' 'KernelStack: 6512 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.218 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.218 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- 
# continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.219 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.219 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.220 07:51:24 -- setup/common.sh@33 -- # echo 0 00:05:19.220 07:51:24 -- setup/common.sh@33 -- # return 0 00:05:19.220 07:51:24 -- setup/hugepages.sh@100 -- # resv=0 00:05:19.220 nr_hugepages=512 00:05:19.220 07:51:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:19.220 
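The trace above is setup/common.sh's get_meminfo helper walking every "key: value" field of /proc/meminfo and skipping it with continue until it reaches the requested key (HugePages_Rsvd here), whose value is echoed back to hugepages.sh as resv=0. The following is a minimal sketch of that lookup reconstructed from the xtrace, not the verbatim setup/common.sh source; the node handling and variable names are simplified.

# Assumed reconstruction of the get_meminfo lookup traced above.
# Each "key: value" line of /proc/meminfo (or a per-node meminfo file)
# is skipped until the requested key matches; only its numeric value is echoed.
shopt -s extglob
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix each line with "Node <n> "
    local var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "${val%% *}"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}
# Example: resv=$(get_meminfo HugePages_Rsvd) yields 0 in the run above.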
resv_hugepages=0 00:05:19.220 07:51:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.220 surplus_hugepages=0 00:05:19.220 07:51:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.220 anon_hugepages=0 00:05:19.220 07:51:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.220 07:51:24 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:19.220 07:51:24 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:19.220 07:51:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:19.220 07:51:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:19.220 07:51:24 -- setup/common.sh@18 -- # local node= 00:05:19.220 07:51:24 -- setup/common.sh@19 -- # local var val 00:05:19.220 07:51:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.220 07:51:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.220 07:51:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.220 07:51:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.220 07:51:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.220 07:51:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8029340 kB' 'MemAvailable: 10509808 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450092 kB' 'Inactive: 2357228 kB' 'Active(anon): 129192 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120348 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157796 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78792 kB' 'KernelStack: 6496 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ Buffers == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.220 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.220 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 
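The surrounding trace is the cross-check hugepages.sh performs with those values: HugePages_Total read back from /proc/meminfo must equal nr_hugepages plus surplus plus reserved (512 in this run), and each online node's HugePages_Surp is then folded into the per-node expectation before the "node0=512 expecting 512" comparison further on. A condensed sketch of that check, assuming the get_meminfo helper sketched earlier and using illustrative variable names rather than the exact hugepages.sh ones:

# Condensed sketch (illustrative, not the exact hugepages.sh code) of the
# consistency check the surrounding trace walks through.
verify_hugepages() {
    local nr_hugepages=$1                       # 512 in this run
    local resv surp total node
    resv=$(get_meminfo HugePages_Rsvd)          # 0 above
    surp=$(get_meminfo HugePages_Surp)
    total=$(get_meminfo HugePages_Total)
    (( total == nr_hugepages + surp + resv )) || return 1
    # Per-node expectation: add each node's surplus to its expected count,
    # then compare - this is what prints "node0=512 expecting 512" below.
    local -A expected=( [0]=$nr_hugepages )     # single NUMA node in this VM
    for node in "${!expected[@]}"; do
        (( expected[node] += $(get_meminfo HugePages_Surp "$node") ))
        echo "node$node=${expected[node]} expecting $nr_hugepages"
    done
}

In the run above this reduces to 512 == 512 + 0 + 0, which is why the trace proceeds to the node0 comparison and the test passes.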
00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 
07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.221 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.221 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.222 07:51:24 -- setup/common.sh@33 -- # echo 512 00:05:19.222 07:51:24 -- setup/common.sh@33 -- # return 0 00:05:19.222 07:51:24 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:19.222 07:51:24 -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.222 07:51:24 -- setup/hugepages.sh@27 -- # local node 00:05:19.222 07:51:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.222 07:51:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:19.222 07:51:24 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:19.222 07:51:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.222 07:51:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.222 07:51:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.222 07:51:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.222 07:51:24 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.222 07:51:24 -- setup/common.sh@18 -- # local node=0 00:05:19.222 07:51:24 -- setup/common.sh@19 -- # local var val 00:05:19.222 07:51:24 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.222 07:51:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.222 07:51:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.222 07:51:24 -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.222 07:51:24 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.222 07:51:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8029340 kB' 'MemUsed: 4212632 kB' 'SwapCached: 0 kB' 'Active: 450128 kB' 'Inactive: 2357228 kB' 'Active(anon): 129228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2688596 kB' 'Mapped: 48888 kB' 'AnonPages: 120396 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79004 kB' 'Slab: 157796 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- 
setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.222 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.222 07:51:24 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.222 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # continue 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.223 07:51:24 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.223 07:51:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.223 07:51:24 -- setup/common.sh@33 -- # echo 0 00:05:19.223 07:51:24 -- setup/common.sh@33 -- # return 0 00:05:19.223 07:51:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:19.223 07:51:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:19.223 07:51:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:19.223 07:51:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:19.223 node0=512 expecting 512 00:05:19.223 07:51:24 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:19.223 07:51:24 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:19.223 00:05:19.223 real 0m0.576s 00:05:19.223 user 0m0.285s 00:05:19.223 sys 0m0.324s 00:05:19.223 07:51:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.223 07:51:24 -- common/autotest_common.sh@10 -- # set +x 00:05:19.223 ************************************ 00:05:19.223 END TEST per_node_1G_alloc 00:05:19.223 ************************************ 00:05:19.223 07:51:25 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:05:19.223 07:51:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:19.223 07:51:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:19.223 07:51:25 -- common/autotest_common.sh@10 -- # set +x 00:05:19.223 ************************************ 00:05:19.223 START TEST even_2G_alloc 00:05:19.223 ************************************ 00:05:19.482 07:51:25 -- common/autotest_common.sh@1104 -- # even_2G_alloc 00:05:19.482 07:51:25 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:05:19.482 07:51:25 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:19.482 07:51:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:19.482 07:51:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:19.482 07:51:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:05:19.482 07:51:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:19.482 07:51:25 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:19.482 07:51:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:19.482 07:51:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:19.482 07:51:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:19.482 07:51:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:19.482 07:51:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:19.482 07:51:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:19.482 07:51:25 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:19.482 07:51:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:19.482 07:51:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:05:19.482 07:51:25 -- setup/hugepages.sh@83 -- # : 0 00:05:19.482 07:51:25 -- 
setup/hugepages.sh@84 -- # : 0 00:05:19.482 07:51:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:19.482 07:51:25 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:05:19.482 07:51:25 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:05:19.482 07:51:25 -- setup/hugepages.sh@153 -- # setup output 00:05:19.482 07:51:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.482 07:51:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:19.745 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:19.745 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.745 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:19.745 07:51:25 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:05:19.745 07:51:25 -- setup/hugepages.sh@89 -- # local node 00:05:19.745 07:51:25 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:19.745 07:51:25 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:19.745 07:51:25 -- setup/hugepages.sh@92 -- # local surp 00:05:19.745 07:51:25 -- setup/hugepages.sh@93 -- # local resv 00:05:19.745 07:51:25 -- setup/hugepages.sh@94 -- # local anon 00:05:19.745 07:51:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:19.745 07:51:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:19.745 07:51:25 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:19.745 07:51:25 -- setup/common.sh@18 -- # local node= 00:05:19.745 07:51:25 -- setup/common.sh@19 -- # local var val 00:05:19.745 07:51:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.745 07:51:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.745 07:51:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.745 07:51:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.745 07:51:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.745 07:51:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6977268 kB' 'MemAvailable: 9457736 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450396 kB' 'Inactive: 2357228 kB' 'Active(anon): 129496 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120664 kB' 'Mapped: 49036 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157748 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78744 kB' 'KernelStack: 6536 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ MemTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var 
val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.745 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.745 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 
07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # 
continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:19.746 07:51:25 -- setup/common.sh@33 -- # echo 0 00:05:19.746 07:51:25 -- setup/common.sh@33 -- # return 0 00:05:19.746 07:51:25 -- setup/hugepages.sh@97 -- # anon=0 00:05:19.746 07:51:25 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:19.746 07:51:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.746 07:51:25 -- setup/common.sh@18 -- # local node= 00:05:19.746 07:51:25 -- setup/common.sh@19 -- # local var val 00:05:19.746 07:51:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.746 07:51:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.746 07:51:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.746 07:51:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.746 07:51:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.746 07:51:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6977268 kB' 'MemAvailable: 9457736 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450120 kB' 'Inactive: 2357228 kB' 'Active(anon): 129220 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120300 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157756 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78752 kB' 'KernelStack: 6512 kB' 'PageTables: 4360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # 
continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.746 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.746 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ 
SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.747 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.747 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- 
# continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.748 07:51:25 -- setup/common.sh@33 -- # echo 0 00:05:19.748 07:51:25 -- setup/common.sh@33 -- # return 0 00:05:19.748 07:51:25 -- setup/hugepages.sh@99 -- # surp=0 00:05:19.748 07:51:25 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:19.748 07:51:25 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:19.748 07:51:25 -- setup/common.sh@18 -- # local node= 00:05:19.748 07:51:25 -- setup/common.sh@19 -- # local var val 00:05:19.748 07:51:25 -- 
setup/common.sh@20 -- # local mem_f mem 00:05:19.748 07:51:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.748 07:51:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.748 07:51:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.748 07:51:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.748 07:51:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6977268 kB' 'MemAvailable: 9457736 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450144 kB' 'Inactive: 2357228 kB' 'Active(anon): 129244 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120368 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157748 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78744 kB' 'KernelStack: 6496 kB' 'PageTables: 4320 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- 
setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.748 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.748 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 
00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- 
setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.749 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.749 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val 
_ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:19.750 07:51:25 -- setup/common.sh@33 -- # echo 0 00:05:19.750 07:51:25 -- setup/common.sh@33 -- # return 0 00:05:19.750 07:51:25 -- setup/hugepages.sh@100 -- # resv=0 00:05:19.750 nr_hugepages=1024 00:05:19.750 07:51:25 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:19.750 resv_hugepages=0 00:05:19.750 07:51:25 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:19.750 surplus_hugepages=0 00:05:19.750 07:51:25 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:19.750 anon_hugepages=0 00:05:19.750 07:51:25 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:19.750 07:51:25 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.750 07:51:25 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:19.750 07:51:25 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:19.750 07:51:25 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:19.750 07:51:25 -- setup/common.sh@18 -- # local node= 00:05:19.750 07:51:25 -- setup/common.sh@19 -- # local var val 00:05:19.750 07:51:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.750 07:51:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.750 07:51:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:19.750 07:51:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:19.750 07:51:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.750 07:51:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6977268 kB' 'MemAvailable: 9457736 kB' 
'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450100 kB' 'Inactive: 2357228 kB' 'Active(anon): 129200 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120336 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157748 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78744 kB' 'KernelStack: 6480 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.750 07:51:25 -- 
setup/common.sh@32 -- # continue 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.750 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.750 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 
00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 
00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.751 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.751 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 
00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:19.752 07:51:25 -- setup/common.sh@33 -- # echo 1024 00:05:19.752 07:51:25 -- setup/common.sh@33 -- # return 0 00:05:19.752 07:51:25 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:19.752 07:51:25 -- setup/hugepages.sh@112 -- # get_nodes 00:05:19.752 07:51:25 -- setup/hugepages.sh@27 -- # local node 00:05:19.752 07:51:25 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:19.752 07:51:25 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:19.752 07:51:25 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:19.752 07:51:25 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:19.752 07:51:25 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:19.752 07:51:25 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:19.752 07:51:25 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:19.752 07:51:25 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:19.752 07:51:25 -- setup/common.sh@18 -- # local node=0 00:05:19.752 07:51:25 -- setup/common.sh@19 -- # local var val 00:05:19.752 07:51:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:19.752 07:51:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:19.752 07:51:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:19.752 07:51:25 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:19.752 07:51:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:19.752 07:51:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6977268 kB' 'MemUsed: 5264704 kB' 'SwapCached: 0 kB' 'Active: 450172 kB' 'Inactive: 2357228 kB' 'Active(anon): 129272 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2688596 kB' 'Mapped: 48888 kB' 'AnonPages: 120372 kB' 'Shmem: 10468 kB' 'KernelStack: 6480 kB' 'PageTables: 4272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79004 kB' 'Slab: 157748 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78744 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- 
setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 
00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # continue 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:19.752 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:19.752 07:51:25 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.011 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.011 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.011 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.011 07:51:25 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.011 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.011 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.011 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.011 07:51:25 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.011 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.011 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- 
setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.012 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.012 07:51:25 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.012 07:51:25 -- setup/common.sh@33 -- # echo 0 00:05:20.012 07:51:25 -- setup/common.sh@33 -- # return 0 00:05:20.012 07:51:25 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.012 07:51:25 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.012 07:51:25 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.012 07:51:25 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.012 node0=1024 expecting 1024 00:05:20.012 07:51:25 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:20.012 07:51:25 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:20.012 00:05:20.012 real 0m0.540s 00:05:20.012 user 0m0.272s 00:05:20.012 sys 0m0.304s 
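The trace above repeats one lookup pattern from setup/common.sh over and over: read /proc/meminfo (or a per-node meminfo file) entry by entry, split each line on ': ', and echo the value once the requested key matches. A minimal standalone sketch of that pattern, using a hypothetical helper name rather than the repo's exact function:

  #!/usr/bin/env bash
  meminfo_value() {
      # Sketch only: look up one key in /proc/meminfo or in a per-node meminfo file,
      # whose lines carry a "Node <n> " prefix that is stripped first, mirroring the
      # prefix strip common.sh applies after mapfile.
      local key=$1 file=${2:-/proc/meminfo} line var val _
      while IFS= read -r line; do
          line=${line#Node * }                      # no-op for plain /proc/meminfo lines
          IFS=': ' read -r var val _ <<<"$line"
          [[ $var == "$key" ]] && { printf '%s\n' "$val"; return 0; }
      done < "$file"
      return 1
  }
  # e.g. meminfo_value HugePages_Surp                                        -> 0 on this host
  #      meminfo_value HugePages_Total /sys/devices/system/node/node0/meminfo -> 1024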
00:05:20.012 07:51:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.012 07:51:25 -- common/autotest_common.sh@10 -- # set +x 00:05:20.012 ************************************ 00:05:20.012 END TEST even_2G_alloc 00:05:20.012 ************************************ 00:05:20.012 07:51:25 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:05:20.012 07:51:25 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.012 07:51:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.012 07:51:25 -- common/autotest_common.sh@10 -- # set +x 00:05:20.012 ************************************ 00:05:20.012 START TEST odd_alloc 00:05:20.012 ************************************ 00:05:20.012 07:51:25 -- common/autotest_common.sh@1104 -- # odd_alloc 00:05:20.012 07:51:25 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:05:20.012 07:51:25 -- setup/hugepages.sh@49 -- # local size=2098176 00:05:20.012 07:51:25 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:20.012 07:51:25 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.012 07:51:25 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:05:20.012 07:51:25 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:20.012 07:51:25 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.012 07:51:25 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.012 07:51:25 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:05:20.012 07:51:25 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.012 07:51:25 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.012 07:51:25 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.012 07:51:25 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.012 07:51:25 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:20.012 07:51:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.012 07:51:25 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:05:20.012 07:51:25 -- setup/hugepages.sh@83 -- # : 0 00:05:20.012 07:51:25 -- setup/hugepages.sh@84 -- # : 0 00:05:20.012 07:51:25 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.012 07:51:25 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:05:20.012 07:51:25 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:05:20.012 07:51:25 -- setup/hugepages.sh@160 -- # setup output 00:05:20.012 07:51:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.012 07:51:25 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.273 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.273 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.273 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.273 07:51:25 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:05:20.273 07:51:25 -- setup/hugepages.sh@89 -- # local node 00:05:20.273 07:51:25 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.273 07:51:25 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.273 07:51:25 -- setup/hugepages.sh@92 -- # local surp 00:05:20.273 07:51:25 -- setup/hugepages.sh@93 -- # local resv 00:05:20.273 07:51:25 -- setup/hugepages.sh@94 -- # local anon 00:05:20.273 07:51:25 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.273 07:51:25 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.273 07:51:25 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.273 07:51:25 -- setup/common.sh@18 -- # local node= 
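The odd_alloc setup above turns HUGEMEM=2049 (MB) into a 2098176 kB request and arrives at nr_hugepages=1025 with 2048 kB pages, i.e. the request is rounded up to a whole page count. A rough sketch of that arithmetic; the exact rounding rule inside hugepages.sh is an assumption here:

  # Sizing arithmetic implied by the odd_alloc trace above (rounding rule assumed).
  HUGEMEM_MB=2049
  size_kb=$(( HUGEMEM_MB * 1024 ))              # 2098176 kB, as passed to get_test_nr_hugepages
  page_kb=2048                                  # Hugepagesize: 2048 kB on this host
  nr=$(( (size_kb + page_kb - 1) / page_kb ))   # ceiling division -> 1025, matching the trace
  echo "nr_hugepages=$nr"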
00:05:20.273 07:51:25 -- setup/common.sh@19 -- # local var val 00:05:20.273 07:51:25 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.273 07:51:25 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.273 07:51:25 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.273 07:51:25 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.273 07:51:25 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.273 07:51:25 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:25 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6978084 kB' 'MemAvailable: 9458552 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450484 kB' 'Inactive: 2357228 kB' 'Active(anon): 129584 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120724 kB' 'Mapped: 49216 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157736 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78732 kB' 'KernelStack: 6520 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 
07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:25 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:25 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.273 07:51:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.273 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.273 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.273 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 
00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.274 07:51:26 -- setup/common.sh@33 -- # echo 0 00:05:20.274 07:51:26 -- setup/common.sh@33 -- # return 0 00:05:20.274 07:51:26 -- setup/hugepages.sh@97 -- # anon=0 00:05:20.274 07:51:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:20.274 07:51:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.274 07:51:26 -- setup/common.sh@18 -- # local node= 00:05:20.274 07:51:26 -- setup/common.sh@19 -- # local var val 00:05:20.274 07:51:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.274 07:51:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.274 07:51:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.274 07:51:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.274 07:51:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.274 07:51:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 
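The long runs of [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue above are get_meminfo walking /proc/meminfo one key at a time with read -r until it reaches the field it was asked for, then echoing that field's number (0 for AnonHugePages here, so anon=0). The same lookup condensed into an awk one-liner, offered only as a sketch of what the traced read loop computes:

  # Hypothetical condensed equivalent of the traced lookup; the real helper
  # is the get_meminfo read loop shown in the xtrace above.
  get=AnonHugePages
  awk -v key="$get" -F': +' '$1 == key { print $2 + 0 }' /proc/meminfo   # -> 0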
07:51:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6977832 kB' 'MemAvailable: 9458300 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450452 kB' 'Inactive: 2357228 kB' 'Active(anon): 129552 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120644 kB' 'Mapped: 49088 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157736 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78732 kB' 'KernelStack: 6512 kB' 'PageTables: 4356 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 
07:51:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.274 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.274 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- 
# IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 
07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read 
-r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.275 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.275 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.275 07:51:26 -- setup/common.sh@33 -- # echo 0 00:05:20.275 07:51:26 -- setup/common.sh@33 -- # return 0 00:05:20.275 07:51:26 -- setup/hugepages.sh@99 -- # surp=0 00:05:20.275 07:51:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:20.276 07:51:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:20.276 07:51:26 -- setup/common.sh@18 -- # local node= 00:05:20.276 07:51:26 -- setup/common.sh@19 -- # local var val 00:05:20.276 07:51:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.276 07:51:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.276 07:51:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.276 07:51:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.276 07:51:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.276 07:51:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6978412 kB' 'MemAvailable: 9458880 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450208 kB' 'Inactive: 2357228 kB' 'Active(anon): 129308 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120432 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157732 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78728 kB' 'KernelStack: 6512 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 
00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Slab == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.276 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.276 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 
-- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 
07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:20.277 07:51:26 -- setup/common.sh@33 -- # echo 0 00:05:20.277 07:51:26 -- setup/common.sh@33 -- # return 0 00:05:20.277 07:51:26 -- setup/hugepages.sh@100 -- # resv=0 00:05:20.277 nr_hugepages=1025 00:05:20.277 07:51:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:05:20.277 resv_hugepages=0 00:05:20.277 07:51:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:20.277 surplus_hugepages=0 00:05:20.277 07:51:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:20.277 anon_hugepages=0 00:05:20.277 07:51:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:20.277 07:51:26 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:20.277 07:51:26 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:05:20.277 07:51:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:20.277 07:51:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:20.277 07:51:26 -- setup/common.sh@18 -- # local node= 00:05:20.277 07:51:26 -- setup/common.sh@19 -- # local var val 00:05:20.277 07:51:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.277 07:51:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.277 07:51:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.277 07:51:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.277 07:51:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.277 07:51:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6978412 kB' 'MemAvailable: 9458880 kB' 'Buffers: 2436 kB' 'Cached: 2686160 kB' 'SwapCached: 0 kB' 'Active: 450100 kB' 'Inactive: 2357228 kB' 'Active(anon): 129200 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120328 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157732 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78728 kB' 'KernelStack: 6480 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459988 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.277 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.277 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 
00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 
07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.537 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.537 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- 
# read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:20.538 07:51:26 -- setup/common.sh@33 -- # echo 1025 00:05:20.538 07:51:26 -- setup/common.sh@33 -- # return 0 00:05:20.538 07:51:26 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:05:20.538 07:51:26 -- setup/hugepages.sh@112 -- # get_nodes 00:05:20.538 07:51:26 -- setup/hugepages.sh@27 -- # local node 00:05:20.538 07:51:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:20.538 07:51:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:05:20.538 07:51:26 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:20.538 07:51:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:20.538 07:51:26 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:20.538 07:51:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:20.538 07:51:26 -- 
setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:20.538 07:51:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:20.538 07:51:26 -- setup/common.sh@18 -- # local node=0 00:05:20.538 07:51:26 -- setup/common.sh@19 -- # local var val 00:05:20.538 07:51:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.538 07:51:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.538 07:51:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:20.538 07:51:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:20.538 07:51:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.538 07:51:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6978664 kB' 'MemUsed: 5263308 kB' 'SwapCached: 0 kB' 'Active: 449920 kB' 'Inactive: 2357228 kB' 'Active(anon): 129020 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357228 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2688596 kB' 'Mapped: 48888 kB' 'AnonPages: 120180 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79004 kB' 'Slab: 157732 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78728 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 
00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.538 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.538 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 
07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.539 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.539 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:20.539 07:51:26 -- setup/common.sh@33 -- # echo 0 00:05:20.539 07:51:26 -- setup/common.sh@33 -- # return 0 00:05:20.539 node0=1025 expecting 1025 00:05:20.539 ************************************ 00:05:20.539 END TEST odd_alloc 00:05:20.539 ************************************ 00:05:20.539 07:51:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:20.539 07:51:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:20.539 07:51:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:20.539 07:51:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:20.539 07:51:26 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:05:20.539 07:51:26 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:05:20.539 00:05:20.539 real 0m0.532s 00:05:20.539 user 0m0.275s 00:05:20.539 sys 0m0.279s 00:05:20.539 07:51:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.539 07:51:26 -- common/autotest_common.sh@10 -- # set +x 00:05:20.539 07:51:26 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:05:20.539 07:51:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:20.539 07:51:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:20.539 07:51:26 -- common/autotest_common.sh@10 -- # set +x 00:05:20.539 ************************************ 00:05:20.539 START TEST custom_alloc 00:05:20.539 ************************************ 00:05:20.539 07:51:26 -- common/autotest_common.sh@1104 -- # custom_alloc 00:05:20.539 07:51:26 -- setup/hugepages.sh@167 -- # local IFS=, 00:05:20.539 07:51:26 -- setup/hugepages.sh@169 -- # local node 00:05:20.539 07:51:26 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:05:20.539 07:51:26 -- setup/hugepages.sh@170 -- # local nodes_hp 00:05:20.539 07:51:26 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:05:20.539 07:51:26 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:05:20.539 07:51:26 -- setup/hugepages.sh@49 -- # local size=1048576 00:05:20.539 07:51:26 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:05:20.539 07:51:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:20.539 07:51:26 -- setup/hugepages.sh@57 -- # nr_hugepages=512 
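(Editor's note, not part of the console output.) At this point the trace has finished the odd_alloc test and computed the hugepage count for custom_alloc: a request of 1048576 kB with the default 2048 kB hugepage size yields nr_hugepages=512, which the lines that follow assign entirely to node 0 via HUGENODE='nodes_hp[0]=512'. A minimal standalone sketch of that size-to-pages arithmetic and single-node assignment, assuming root and using hypothetical variable names rather than the actual setup/hugepages.sh:

#!/usr/bin/env bash
# Sketch only: turn a requested size (kB) into a hugepage count and pin it to one NUMA node.
requested_kb=1048576                                                # 1 GiB, as requested by the custom_alloc test above
hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)      # typically 2048 on x86_64
nr_hugepages=$(( requested_kb / hugepage_kb ))                      # 1048576 / 2048 = 512
# Mirror HUGENODE='nodes_hp[0]=512': give the whole allocation to node 0 (requires root).
echo "$nr_hugepages" > "/sys/devices/system/node/node0/hugepages/hugepages-${hugepage_kb}kB/nr_hugepages"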
00:05:20.539 07:51:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:05:20.539 07:51:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.539 07:51:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.539 07:51:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:20.539 07:51:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.539 07:51:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.539 07:51:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.539 07:51:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.539 07:51:26 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:05:20.539 07:51:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.539 07:51:26 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:05:20.539 07:51:26 -- setup/hugepages.sh@83 -- # : 0 00:05:20.539 07:51:26 -- setup/hugepages.sh@84 -- # : 0 00:05:20.539 07:51:26 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:05:20.539 07:51:26 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:05:20.539 07:51:26 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:05:20.539 07:51:26 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:05:20.539 07:51:26 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:05:20.539 07:51:26 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:05:20.539 07:51:26 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:05:20.539 07:51:26 -- setup/hugepages.sh@62 -- # user_nodes=() 00:05:20.539 07:51:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:20.539 07:51:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:05:20.539 07:51:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:20.539 07:51:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:20.539 07:51:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:20.539 07:51:26 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:05:20.539 07:51:26 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:05:20.539 07:51:26 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:05:20.539 07:51:26 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:05:20.539 07:51:26 -- setup/hugepages.sh@78 -- # return 0 00:05:20.539 07:51:26 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:05:20.539 07:51:26 -- setup/hugepages.sh@187 -- # setup output 00:05:20.539 07:51:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:20.539 07:51:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:20.799 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:20.799 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.799 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:20.799 07:51:26 -- setup/hugepages.sh@188 -- # nr_hugepages=512 00:05:20.799 07:51:26 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:05:20.799 07:51:26 -- setup/hugepages.sh@89 -- # local node 00:05:20.799 07:51:26 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:20.799 07:51:26 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:20.799 07:51:26 -- setup/hugepages.sh@92 -- # local surp 00:05:20.799 07:51:26 -- setup/hugepages.sh@93 -- # local resv 00:05:20.799 07:51:26 -- setup/hugepages.sh@94 -- # local anon 00:05:20.799 07:51:26 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:20.799 07:51:26 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:20.799 
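(Editor's note, not part of the console output.) The long runs of "[[ <key> == ... ]] / continue" that follow, like the ones in the odd_alloc test above, are all one pattern: get_meminfo splits each line of /proc/meminfo, or of /sys/devices/system/node/node<N>/meminfo when a node is given, on IFS=': ', skips every key until it reaches the requested one, then echoes its value. The bracketed-list test just above ("always [madvise] never" not matching *[never]*) appears to gate this on the transparent_hugepage "enabled" setting before AnonHugePages is read. A minimal standalone sketch of the lookup, with a hypothetical function name and a sed-based prefix strip instead of the script's mapfile/extglob approach:

# Sketch only: look up one key in /proc/meminfo or a per-node meminfo file, as the trace does.
meminfo_value() {
    local want=$1 node=${2-}
    local file=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        file=/sys/devices/system/node/node$node/meminfo
    fi
    local var val _
    # Per-node files prefix each line with "Node <N> "; drop that so both formats parse the same way.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$want" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$file")
    return 1
}

For example, meminfo_value HugePages_Surp 0 is the same lookup the verify step below performs for node 0.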
07:51:26 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:20.799 07:51:26 -- setup/common.sh@18 -- # local node= 00:05:20.799 07:51:26 -- setup/common.sh@19 -- # local var val 00:05:20.799 07:51:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:20.799 07:51:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:20.799 07:51:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:20.799 07:51:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:20.799 07:51:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:20.799 07:51:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:20.799 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.799 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.799 07:51:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8032172 kB' 'MemAvailable: 10512644 kB' 'Buffers: 2436 kB' 'Cached: 2686164 kB' 'SwapCached: 0 kB' 'Active: 450412 kB' 'Inactive: 2357232 kB' 'Active(anon): 129512 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120624 kB' 'Mapped: 48976 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157716 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78712 kB' 'KernelStack: 6504 kB' 'PageTables: 4224 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54884 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:20.799 07:51:26 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.799 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.799 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.799 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.799 07:51:26 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.799 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.799 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 
07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- 
setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.800 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.800 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.801 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.801 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.801 07:51:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.801 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.801 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.801 07:51:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:20.801 07:51:26 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.801 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.801 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.801 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.801 07:51:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.801 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.801 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.801 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.801 07:51:26 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.801 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.801 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.801 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.801 07:51:26 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.801 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.801 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.801 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.801 07:51:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.801 07:51:26 -- setup/common.sh@32 -- # continue 00:05:20.801 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:20.801 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:20.801 07:51:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:20.801 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.063 07:51:26 -- setup/common.sh@33 -- # echo 0 00:05:21.063 07:51:26 -- setup/common.sh@33 -- # return 0 00:05:21.063 07:51:26 -- setup/hugepages.sh@97 -- # anon=0 00:05:21.063 07:51:26 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.063 07:51:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.063 07:51:26 -- setup/common.sh@18 -- # local node= 00:05:21.063 07:51:26 -- setup/common.sh@19 -- # local var val 00:05:21.063 07:51:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.063 07:51:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.063 07:51:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.063 07:51:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.063 07:51:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.063 07:51:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8032172 kB' 'MemAvailable: 10512644 kB' 'Buffers: 2436 kB' 'Cached: 2686164 kB' 'SwapCached: 0 kB' 'Active: 450196 kB' 'Inactive: 2357232 kB' 'Active(anon): 129296 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120444 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157712 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78708 kB' 'KernelStack: 6512 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 
00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.063 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.063 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ AnonPages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 
07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.064 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.064 07:51:26 -- setup/common.sh@33 -- # echo 0 00:05:21.064 07:51:26 -- setup/common.sh@33 -- # return 0 00:05:21.064 07:51:26 -- setup/hugepages.sh@99 -- # surp=0 00:05:21.064 07:51:26 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.064 07:51:26 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.064 07:51:26 -- setup/common.sh@18 -- # local node= 00:05:21.064 07:51:26 -- setup/common.sh@19 -- # local var val 00:05:21.064 07:51:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.064 07:51:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.064 07:51:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.064 07:51:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.064 07:51:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.064 07:51:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.064 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8032172 kB' 'MemAvailable: 10512644 kB' 'Buffers: 2436 kB' 'Cached: 2686164 kB' 'SwapCached: 0 kB' 'Active: 450244 kB' 'Inactive: 2357232 kB' 'Active(anon): 129344 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120448 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157712 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78708 kB' 'KernelStack: 6512 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 
00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # 
IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
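The trace above (and continuing below) is setup/common.sh's get_meminfo helper walking /proc/meminfo one 'key: value' line at a time and skipping every key that is not the requested field (HugePages_Rsvd in this pass), which is why each key appears in a [[ ... ]] test immediately followed by 'continue'. A minimal standalone sketch of that read/match/continue pattern, assuming only a readable /proc/meminfo; the function name is illustrative, not the SPDK original:

  # Print the value of a single /proc/meminfo field, mirroring the traced loop.
  get_meminfo_sketch() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # non-matching keys are skipped, as in the trace
          echo "$val"
          return 0
      done < /proc/meminfo
      return 1
  }

  get_meminfo_sketch HugePages_Rsvd   # prints 0 in the run captured above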
00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.065 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.065 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 
-- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.066 07:51:26 -- setup/common.sh@33 -- # echo 0 00:05:21.066 07:51:26 -- setup/common.sh@33 -- # return 0 00:05:21.066 07:51:26 -- setup/hugepages.sh@100 -- # resv=0 00:05:21.066 nr_hugepages=512 00:05:21.066 07:51:26 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:05:21.066 resv_hugepages=0 00:05:21.066 07:51:26 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.066 surplus_hugepages=0 00:05:21.066 07:51:26 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.066 anon_hugepages=0 00:05:21.066 07:51:26 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.066 07:51:26 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:21.066 07:51:26 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:05:21.066 07:51:26 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.066 07:51:26 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.066 07:51:26 -- setup/common.sh@18 -- # local node= 00:05:21.066 07:51:26 -- setup/common.sh@19 -- # local var val 00:05:21.066 07:51:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.066 07:51:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.066 07:51:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.066 07:51:26 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.066 07:51:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.066 07:51:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8032172 kB' 'MemAvailable: 10512644 kB' 'Buffers: 2436 kB' 'Cached: 2686164 kB' 'SwapCached: 0 kB' 'Active: 450128 kB' 'Inactive: 2357232 kB' 'Active(anon): 129228 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'AnonPages: 120328 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157708 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78704 kB' 'KernelStack: 6496 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985300 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 
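With resv=0, surplus=0 and anon=0 collected, hugepages.sh checks that the 512 pages the kernel reports are fully accounted for by the requested pool plus surplus and reserved pages, the (( 512 == nr_hugepages + surp + resv )) test seen above. A sketch of that consistency check, reusing the get_meminfo_sketch helper from earlier; names and structure are illustrative rather than the script's own:

  # Verify the kernel's hugepage accounting for a requested pool size.
  verify_hugepages_sketch() {
      local nr_hugepages=$1                          # requested pool, e.g. 512
      local total resv surp
      total=$(get_meminfo_sketch HugePages_Total)    # 512 in the run above
      resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 in the run above
      surp=$(get_meminfo_sketch HugePages_Surp)      # 0 in the run above
      (( total == nr_hugepages + surp + resv ))
  }

  verify_hugepages_sketch 512 && echo 'hugepage pool is consistent'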
00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.066 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.066 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- 
setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
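The meminfo snapshot printed a few lines up also makes the pool size easy to cross-check by hand: HugePages_Total of 512 pages at a Hugepagesize of 2048 kB is exactly the 1048576 kB shown as Hugetlb, i.e. 1 GiB set aside for this custom_alloc run. The same arithmetic straight from /proc/meminfo, as a one-line sketch:

  # 512 pages * 2048 kB/page = 1048576 kB, matching the 'Hugetlb:' field above
  awk '/^HugePages_Total:/ {n=$2} /^Hugepagesize:/ {sz=$2} END {print n*sz " kB"}' /proc/meminfo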
00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.067 07:51:26 -- setup/common.sh@33 -- # echo 512 00:05:21.067 07:51:26 -- setup/common.sh@33 -- # return 0 00:05:21.067 07:51:26 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:05:21.067 07:51:26 -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.067 07:51:26 -- setup/hugepages.sh@27 -- # local node 00:05:21.067 07:51:26 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.067 07:51:26 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:05:21.067 07:51:26 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.067 07:51:26 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.067 07:51:26 -- setup/hugepages.sh@115 
-- # for node in "${!nodes_test[@]}" 00:05:21.067 07:51:26 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.067 07:51:26 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.067 07:51:26 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.067 07:51:26 -- setup/common.sh@18 -- # local node=0 00:05:21.067 07:51:26 -- setup/common.sh@19 -- # local var val 00:05:21.067 07:51:26 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.067 07:51:26 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.067 07:51:26 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.067 07:51:26 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.067 07:51:26 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.067 07:51:26 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.067 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.067 07:51:26 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 8032172 kB' 'MemUsed: 4209800 kB' 'SwapCached: 0 kB' 'Active: 450252 kB' 'Inactive: 2357232 kB' 'Active(anon): 129352 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 236 kB' 'Writeback: 0 kB' 'FilePages: 2688600 kB' 'Mapped: 48888 kB' 'AnonPages: 120452 kB' 'Shmem: 10468 kB' 'KernelStack: 6512 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79004 kB' 'Slab: 157708 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78704 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 
07:51:26 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 
-- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ FileHugePages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # continue 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.068 07:51:26 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.068 07:51:26 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.068 07:51:26 -- setup/common.sh@33 -- # echo 0 00:05:21.068 07:51:26 -- setup/common.sh@33 -- # return 0 00:05:21.068 07:51:26 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:21.068 07:51:26 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.068 07:51:26 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.068 07:51:26 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.068 node0=512 expecting 512 00:05:21.068 07:51:26 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:05:21.068 07:51:26 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:05:21.068 00:05:21.068 real 0m0.551s 00:05:21.068 user 0m0.275s 00:05:21.068 sys 0m0.310s 00:05:21.068 07:51:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.068 07:51:26 -- common/autotest_common.sh@10 -- # set +x 00:05:21.068 ************************************ 00:05:21.068 END TEST custom_alloc 00:05:21.068 ************************************ 00:05:21.068 07:51:26 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:05:21.068 07:51:26 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:21.068 07:51:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:21.068 07:51:26 -- common/autotest_common.sh@10 -- # set +x 00:05:21.068 ************************************ 00:05:21.068 START TEST no_shrink_alloc 00:05:21.069 ************************************ 00:05:21.069 07:51:26 -- common/autotest_common.sh@1104 -- # no_shrink_alloc 00:05:21.069 07:51:26 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:05:21.069 07:51:26 -- setup/hugepages.sh@49 -- # local size=2097152 00:05:21.069 07:51:26 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:05:21.069 07:51:26 -- setup/hugepages.sh@51 -- # shift 00:05:21.069 07:51:26 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:05:21.069 07:51:26 -- setup/hugepages.sh@52 -- # local node_ids 00:05:21.069 07:51:26 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:05:21.069 07:51:26 -- setup/hugepages.sh@57 -- # 
nr_hugepages=1024 00:05:21.069 07:51:26 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:05:21.069 07:51:26 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:05:21.069 07:51:26 -- setup/hugepages.sh@62 -- # local user_nodes 00:05:21.069 07:51:26 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:05:21.069 07:51:26 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:05:21.069 07:51:26 -- setup/hugepages.sh@67 -- # nodes_test=() 00:05:21.069 07:51:26 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:05:21.069 07:51:26 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:05:21.069 07:51:26 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:05:21.069 07:51:26 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:05:21.069 07:51:26 -- setup/hugepages.sh@73 -- # return 0 00:05:21.069 07:51:26 -- setup/hugepages.sh@198 -- # setup output 00:05:21.069 07:51:26 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.069 07:51:26 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:21.328 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:21.591 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.591 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:21.591 07:51:27 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:05:21.591 07:51:27 -- setup/hugepages.sh@89 -- # local node 00:05:21.591 07:51:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:21.591 07:51:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:21.591 07:51:27 -- setup/hugepages.sh@92 -- # local surp 00:05:21.591 07:51:27 -- setup/hugepages.sh@93 -- # local resv 00:05:21.591 07:51:27 -- setup/hugepages.sh@94 -- # local anon 00:05:21.591 07:51:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:21.591 07:51:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:21.591 07:51:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:21.591 07:51:27 -- setup/common.sh@18 -- # local node= 00:05:21.591 07:51:27 -- setup/common.sh@19 -- # local var val 00:05:21.591 07:51:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.591 07:51:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.591 07:51:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.591 07:51:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.591 07:51:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.591 07:51:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6984260 kB' 'MemAvailable: 9464732 kB' 'Buffers: 2436 kB' 'Cached: 2686164 kB' 'SwapCached: 0 kB' 'Active: 450340 kB' 'Inactive: 2357232 kB' 'Active(anon): 129440 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120856 kB' 'Mapped: 48940 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157696 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78692 kB' 'KernelStack: 6544 kB' 'PageTables: 4452 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350732 kB' 
'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 
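The AnonHugePages lookup running here reads the system-wide /proc/meminfo (no node argument was given), whereas the HugePages_Surp lookup earlier switched to /sys/devices/system/node/node0/meminfo and stripped the leading 'Node 0 ' from every line before parsing. A simplified sketch of that source selection, standing in for the traced mapfile/extglob logic; the helper name and the prefix handling are illustrative:

  # Read one field from either /proc/meminfo or a per-NUMA-node meminfo file.
  node_meminfo_sketch() {
      local get=$1 node=${2-}      # field name, optional NUMA node id
      local mem_f=/proc/meminfo line var val _
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      while IFS= read -r line; do
          line=${line#Node "$node" }             # per-node lines carry a 'Node <n> ' prefix
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue
          echo "$val"
          return 0
      done < "$mem_f"
      return 1
  }

  node_meminfo_sketch AnonHugePages      # system-wide, 0 in the run above
  node_meminfo_sketch HugePages_Free 0   # node 0, 512 in the earlier custom_alloc pass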
00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 
07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.591 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.591 07:51:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # 
continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:21.592 07:51:27 -- setup/common.sh@33 -- # echo 0 00:05:21.592 07:51:27 -- setup/common.sh@33 -- # return 0 00:05:21.592 07:51:27 -- setup/hugepages.sh@97 -- # anon=0 00:05:21.592 07:51:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:21.592 07:51:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.592 07:51:27 -- setup/common.sh@18 -- # local node= 00:05:21.592 07:51:27 -- setup/common.sh@19 -- # local var val 00:05:21.592 07:51:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.592 07:51:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.592 07:51:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.592 07:51:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.592 07:51:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.592 07:51:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6984260 kB' 'MemAvailable: 9464732 kB' 'Buffers: 2436 kB' 'Cached: 2686164 kB' 'SwapCached: 0 kB' 'Active: 450152 kB' 'Inactive: 2357232 kB' 'Active(anon): 129252 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120408 kB' 'Mapped: 48940 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157696 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78692 kB' 'KernelStack: 6528 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # 
continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # 
[[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.592 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.592 07:51:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 
00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- 
setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.593 07:51:27 -- setup/common.sh@33 -- # echo 0 00:05:21.593 07:51:27 -- setup/common.sh@33 -- # return 0 00:05:21.593 07:51:27 -- setup/hugepages.sh@99 -- # surp=0 00:05:21.593 07:51:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:21.593 07:51:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:21.593 07:51:27 -- setup/common.sh@18 -- # local node= 00:05:21.593 07:51:27 -- setup/common.sh@19 -- # local var val 00:05:21.593 07:51:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.593 07:51:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.593 07:51:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.593 07:51:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.593 07:51:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.593 07:51:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6984260 kB' 'MemAvailable: 9464732 kB' 'Buffers: 2436 kB' 'Cached: 2686164 kB' 'SwapCached: 0 kB' 'Active: 450004 kB' 'Inactive: 2357232 kB' 'Active(anon): 129104 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120508 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157692 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78688 kB' 'KernelStack: 6560 kB' 'PageTables: 4500 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54820 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.593 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.593 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.593 07:51:27 -- 
setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- 
setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 
00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.594 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.594 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:21.595 07:51:27 -- setup/common.sh@33 -- # echo 0 00:05:21.595 07:51:27 -- setup/common.sh@33 -- # return 0 00:05:21.595 07:51:27 -- setup/hugepages.sh@100 -- # resv=0 00:05:21.595 07:51:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:21.595 nr_hugepages=1024 00:05:21.595 resv_hugepages=0 00:05:21.595 07:51:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:21.595 surplus_hugepages=0 00:05:21.595 07:51:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:21.595 anon_hugepages=0 00:05:21.595 07:51:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:21.595 07:51:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.595 07:51:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:21.595 07:51:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:21.595 07:51:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:21.595 07:51:27 -- setup/common.sh@18 -- # local node= 00:05:21.595 07:51:27 -- setup/common.sh@19 -- # local var val 00:05:21.595 07:51:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.595 07:51:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 
00:05:21.595 07:51:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:21.595 07:51:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:21.595 07:51:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.595 07:51:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6984260 kB' 'MemAvailable: 9464732 kB' 'Buffers: 2436 kB' 'Cached: 2686164 kB' 'SwapCached: 0 kB' 'Active: 450048 kB' 'Inactive: 2357232 kB' 'Active(anon): 129148 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120324 kB' 'Mapped: 48888 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157676 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78672 kB' 'KernelStack: 6476 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54772 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- 
setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 
00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.595 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.595 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 
00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 
00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:21.596 07:51:27 -- setup/common.sh@33 -- # echo 1024 00:05:21.596 07:51:27 -- setup/common.sh@33 -- # return 0 00:05:21.596 07:51:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:21.596 07:51:27 -- setup/hugepages.sh@112 -- # get_nodes 00:05:21.596 07:51:27 -- setup/hugepages.sh@27 -- # local node 00:05:21.596 07:51:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:21.596 07:51:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:21.596 07:51:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:21.596 07:51:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:21.596 07:51:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:21.596 07:51:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:21.596 07:51:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:21.596 07:51:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:21.596 07:51:27 -- setup/common.sh@18 -- # local node=0 00:05:21.596 07:51:27 -- setup/common.sh@19 -- # local var val 00:05:21.596 07:51:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:21.596 07:51:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:21.596 07:51:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:21.596 07:51:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:21.596 07:51:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:21.596 07:51:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6984260 kB' 'MemUsed: 5257712 kB' 'SwapCached: 0 kB' 'Active: 450052 kB' 'Inactive: 2357232 kB' 'Active(anon): 129152 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2688600 kB' 'Mapped: 48888 kB' 'AnonPages: 120404 kB' 'Shmem: 10468 kB' 'KernelStack: 6524 kB' 'PageTables: 4348 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79004 kB' 'Slab: 157676 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78672 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.596 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.596 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 
00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- 
setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # continue 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:21.597 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:21.597 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:21.597 07:51:27 -- setup/common.sh@33 -- # echo 0 00:05:21.597 07:51:27 -- setup/common.sh@33 -- # return 0 00:05:21.597 07:51:27 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:05:21.597 07:51:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:21.597 07:51:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:21.597 07:51:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:21.597 node0=1024 expecting 1024 00:05:21.597 07:51:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:21.597 07:51:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:21.597 07:51:27 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:05:21.597 07:51:27 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:05:21.597 07:51:27 -- setup/hugepages.sh@202 -- # setup output 00:05:21.597 07:51:27 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:21.597 07:51:27 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:22.168 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:22.168 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.168 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:05:22.168 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:05:22.168 07:51:27 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:05:22.168 07:51:27 -- setup/hugepages.sh@89 -- # local node 00:05:22.168 07:51:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:05:22.168 07:51:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:05:22.168 07:51:27 -- setup/hugepages.sh@92 -- # local surp 00:05:22.168 07:51:27 -- setup/hugepages.sh@93 -- # local resv 00:05:22.168 07:51:27 -- setup/hugepages.sh@94 -- # local anon 00:05:22.168 07:51:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:05:22.168 07:51:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:05:22.168 07:51:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:05:22.168 07:51:27 -- setup/common.sh@18 -- # local node= 00:05:22.168 07:51:27 -- setup/common.sh@19 -- # local var val 00:05:22.168 07:51:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.168 07:51:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.168 07:51:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.168 07:51:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.168 07:51:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.168 07:51:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.168 07:51:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6979244 kB' 'MemAvailable: 9459716 kB' 'Buffers: 2436 kB' 'Cached: 2686164 kB' 'SwapCached: 0 kB' 'Active: 450764 kB' 'Inactive: 2357232 kB' 'Active(anon): 129864 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120976 kB' 'Mapped: 49268 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157684 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78680 kB' 'KernelStack: 6596 kB' 'PageTables: 4676 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54900 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 
0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.168 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.168 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- 
setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # 
read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.169 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.169 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:05:22.170 07:51:27 -- setup/common.sh@33 -- # echo 0 00:05:22.170 07:51:27 -- setup/common.sh@33 -- # return 0 00:05:22.170 07:51:27 -- setup/hugepages.sh@97 -- # anon=0 00:05:22.170 07:51:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:05:22.170 07:51:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.170 07:51:27 -- setup/common.sh@18 -- # local node= 00:05:22.170 07:51:27 -- setup/common.sh@19 -- # local var val 00:05:22.170 07:51:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.170 07:51:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.170 07:51:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.170 07:51:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.170 07:51:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.170 07:51:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6979244 kB' 'MemAvailable: 9459716 kB' 'Buffers: 2436 kB' 'Cached: 2686164 kB' 'SwapCached: 0 kB' 'Active: 450564 kB' 'Inactive: 2357232 kB' 'Active(anon): 129664 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120760 kB' 'Mapped: 48896 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157684 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78680 kB' 'KernelStack: 6524 kB' 'PageTables: 4580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 
07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.170 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.170 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r 
var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.171 07:51:27 -- 
setup/common.sh@33 -- # echo 0 00:05:22.171 07:51:27 -- setup/common.sh@33 -- # return 0 00:05:22.171 07:51:27 -- setup/hugepages.sh@99 -- # surp=0 00:05:22.171 07:51:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:05:22.171 07:51:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:05:22.171 07:51:27 -- setup/common.sh@18 -- # local node= 00:05:22.171 07:51:27 -- setup/common.sh@19 -- # local var val 00:05:22.171 07:51:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.171 07:51:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.171 07:51:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:05:22.171 07:51:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.171 07:51:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.171 07:51:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.171 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.171 07:51:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6978992 kB' 'MemAvailable: 9459464 kB' 'Buffers: 2436 kB' 'Cached: 2686164 kB' 'SwapCached: 0 kB' 'Active: 450220 kB' 'Inactive: 2357232 kB' 'Active(anon): 129320 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120668 kB' 'Mapped: 48896 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157684 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78680 kB' 'KernelStack: 6524 kB' 'PageTables: 4584 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- 
setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.172 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.172 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 
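Note: the long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_... ]]" followed by "continue" in this trace are setup/common.sh's get_meminfo helper scanning /proc/meminfo (or a node's meminfo file) one field at a time until it reaches the requested key; the backslash-escaped right-hand side is only bash xtrace's rendering of a literal string comparison, not a glob. The following is a minimal sketch of that lookup reconstructed from the trace; function name, arguments, and details are approximations of the real setup/common.sh, not the verbatim script.

    #!/usr/bin/env bash
    # Sketch reconstructed from the xtrace above; not the actual setup/common.sh.
    shopt -s extglob   # needed for the "Node <n> " prefix strip on per-node files

    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # A per-node query reads that node's own meminfo file when it exists.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")       # drop the "Node 0 " prefix, if any
        local line var val _
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # not the requested field: keep scanning
            echo "$val"                        # value only, e.g. "1024" or a kB count
            return 0
        done
        return 1
    }

    get_meminfo HugePages_Total      # prints 1024 on this runner, per the trace
    get_meminfo HugePages_Surp 0     # per-node lookup, reads node0's meminfo

The caller (verify_nr_hugepages) then checks that HugePages_Total equals nr_hugepages plus the surplus and reserved counts it just read, which is why the same scan repeats here for AnonHugePages, HugePages_Surp, HugePages_Rsvd, and HugePages_Total in turn.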
00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:05:22.173 07:51:27 -- setup/common.sh@33 -- # echo 0 00:05:22.173 07:51:27 -- setup/common.sh@33 -- # return 0 00:05:22.173 07:51:27 -- setup/hugepages.sh@100 -- # resv=0 00:05:22.173 07:51:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:05:22.173 nr_hugepages=1024 00:05:22.173 resv_hugepages=0 00:05:22.173 07:51:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:05:22.173 surplus_hugepages=0 00:05:22.173 07:51:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:05:22.173 anon_hugepages=0 00:05:22.173 07:51:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:05:22.173 07:51:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:22.173 07:51:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:05:22.173 07:51:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:05:22.173 07:51:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:05:22.173 07:51:27 -- setup/common.sh@18 -- # local node= 00:05:22.173 07:51:27 -- setup/common.sh@19 -- # local var val 00:05:22.173 07:51:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.173 07:51:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.173 07:51:27 -- setup/common.sh@23 -- # [[ -e 
/sys/devices/system/node/node/meminfo ]] 00:05:22.173 07:51:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:05:22.173 07:51:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.173 07:51:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.173 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.173 07:51:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6978992 kB' 'MemAvailable: 9459464 kB' 'Buffers: 2436 kB' 'Cached: 2686164 kB' 'SwapCached: 0 kB' 'Active: 450076 kB' 'Inactive: 2357232 kB' 'Active(anon): 129176 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'AnonPages: 120380 kB' 'Mapped: 48896 kB' 'Shmem: 10468 kB' 'KReclaimable: 79004 kB' 'Slab: 157680 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78676 kB' 'KernelStack: 6540 kB' 'PageTables: 4392 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461012 kB' 'Committed_AS: 350732 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6096 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 175980 kB' 'DirectMap2M: 6115328 kB' 'DirectMap1G: 8388608 kB' 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.174 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.174 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 
07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.175 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:05:22.175 07:51:27 -- setup/common.sh@33 -- # echo 1024 00:05:22.175 07:51:27 -- setup/common.sh@33 -- # return 0 00:05:22.175 07:51:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:05:22.175 07:51:27 -- setup/hugepages.sh@112 -- # get_nodes 00:05:22.175 07:51:27 -- setup/hugepages.sh@27 -- # local node 00:05:22.175 07:51:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:05:22.175 07:51:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:05:22.175 07:51:27 -- setup/hugepages.sh@32 -- # no_nodes=1 00:05:22.175 07:51:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:05:22.175 07:51:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:05:22.175 07:51:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:05:22.175 07:51:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:05:22.175 07:51:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:05:22.175 07:51:27 -- setup/common.sh@18 -- # local node=0 00:05:22.175 07:51:27 -- setup/common.sh@19 -- # local var val 00:05:22.175 07:51:27 -- setup/common.sh@20 -- # local mem_f mem 00:05:22.175 07:51:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:05:22.175 07:51:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:05:22.175 07:51:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:05:22.175 07:51:27 -- setup/common.sh@28 -- # mapfile -t mem 00:05:22.175 07:51:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:05:22.175 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241972 kB' 'MemFree: 6978992 kB' 'MemUsed: 5262980 kB' 'SwapCached: 0 kB' 'Active: 449972 kB' 'Inactive: 2357232 kB' 'Active(anon): 129072 kB' 'Inactive(anon): 0 kB' 'Active(file): 320900 kB' 'Inactive(file): 2357232 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 132 kB' 'Writeback: 0 kB' 'FilePages: 2688600 kB' 'Mapped: 48896 kB' 'AnonPages: 120272 kB' 'Shmem: 10468 kB' 'KernelStack: 6540 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 79004 kB' 'Slab: 157680 kB' 'SReclaimable: 79004 kB' 'SUnreclaim: 78676 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 
0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 
00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- 
setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.176 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.176 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.177 07:51:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.177 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.177 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.177 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.177 07:51:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.177 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.177 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.177 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.177 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.177 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.177 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.177 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.177 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.177 07:51:27 -- setup/common.sh@32 -- # continue 00:05:22.177 07:51:27 -- setup/common.sh@31 -- # IFS=': ' 00:05:22.177 07:51:27 -- setup/common.sh@31 -- # read -r var val _ 00:05:22.177 07:51:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:05:22.177 07:51:27 -- setup/common.sh@33 -- # echo 0 00:05:22.177 07:51:27 -- setup/common.sh@33 -- # return 0 00:05:22.177 07:51:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:05:22.177 07:51:27 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:05:22.177 07:51:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:05:22.177 07:51:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:05:22.177 node0=1024 expecting 1024 00:05:22.177 07:51:27 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:05:22.177 07:51:27 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:05:22.177 00:05:22.177 real 0m1.129s 00:05:22.177 user 0m0.553s 00:05:22.177 sys 0m0.621s 00:05:22.177 07:51:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.177 07:51:27 -- common/autotest_common.sh@10 -- # set +x 00:05:22.177 ************************************ 00:05:22.177 END TEST no_shrink_alloc 00:05:22.177 ************************************ 00:05:22.177 07:51:27 -- setup/hugepages.sh@217 -- # clear_hp 00:05:22.177 07:51:27 -- setup/hugepages.sh@37 -- # local node hp 00:05:22.177 07:51:27 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:05:22.177 07:51:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:22.177 07:51:27 -- setup/hugepages.sh@41 -- # echo 0 00:05:22.177 07:51:27 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:05:22.177 07:51:27 -- setup/hugepages.sh@41 -- # echo 0 00:05:22.177 07:51:27 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:05:22.177 07:51:27 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:05:22.435 ************************************ 00:05:22.435 END TEST hugepages 00:05:22.435 ************************************ 00:05:22.435 00:05:22.435 real 0m4.795s 00:05:22.435 user 0m2.295s 00:05:22.435 sys 0m2.600s 00:05:22.435 07:51:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.435 07:51:27 -- common/autotest_common.sh@10 -- # set +x 00:05:22.435 07:51:28 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:22.435 07:51:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:22.435 07:51:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:22.435 07:51:28 -- common/autotest_common.sh@10 -- # set +x 00:05:22.435 ************************************ 00:05:22.435 START TEST driver 00:05:22.435 ************************************ 00:05:22.435 07:51:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:05:22.435 * Looking for test storage... 
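The long field-by-field scan that fills the trace above is the xtrace of the get_meminfo helper: it reads /proc/meminfo (or a per-node meminfo file when a node number is given), strips the "Node N" prefix that per-node files add, and prints the value of the single field the caller asked for. The following is a condensed, illustrative rewrite of that logic, not the actual setup/common.sh implementation (the function name and the sed-based prefix strip are simplifications):

get_meminfo_sketch() {
    # usage: get_meminfo_sketch <field> [node]
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # per-node files prefix every line with "Node N "; drop that before parsing
    sed 's/^Node [0-9]* //' "$mem_f" | while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"        # e.g. 1024 for HugePages_Total, 0 for HugePages_Surp
            break
        fi
    done
}

With the values traced above, get_meminfo_sketch HugePages_Total prints 1024 and get_meminfo_sketch HugePages_Surp 0 prints 0, which is the echo 1024 / echo 0 pair the hugepages accounting consumes.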
00:05:22.435 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:22.435 07:51:28 -- setup/driver.sh@68 -- # setup reset 00:05:22.435 07:51:28 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:22.435 07:51:28 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:23.003 07:51:28 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:05:23.003 07:51:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:23.003 07:51:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:23.003 07:51:28 -- common/autotest_common.sh@10 -- # set +x 00:05:23.003 ************************************ 00:05:23.003 START TEST guess_driver 00:05:23.003 ************************************ 00:05:23.003 07:51:28 -- common/autotest_common.sh@1104 -- # guess_driver 00:05:23.003 07:51:28 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:05:23.003 07:51:28 -- setup/driver.sh@47 -- # local fail=0 00:05:23.003 07:51:28 -- setup/driver.sh@49 -- # pick_driver 00:05:23.003 07:51:28 -- setup/driver.sh@36 -- # vfio 00:05:23.003 07:51:28 -- setup/driver.sh@21 -- # local iommu_grups 00:05:23.003 07:51:28 -- setup/driver.sh@22 -- # local unsafe_vfio 00:05:23.003 07:51:28 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:05:23.003 07:51:28 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:05:23.003 07:51:28 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:05:23.003 07:51:28 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:05:23.003 07:51:28 -- setup/driver.sh@32 -- # return 1 00:05:23.003 07:51:28 -- setup/driver.sh@38 -- # uio 00:05:23.003 07:51:28 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:05:23.003 07:51:28 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:05:23.003 07:51:28 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:05:23.003 07:51:28 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:05:23.003 07:51:28 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:05:23.003 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:05:23.003 07:51:28 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:05:23.003 07:51:28 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:05:23.003 07:51:28 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:05:23.003 Looking for driver=uio_pci_generic 00:05:23.003 07:51:28 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:05:23.003 07:51:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.003 07:51:28 -- setup/driver.sh@45 -- # setup output config 00:05:23.003 07:51:28 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:23.003 07:51:28 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:23.585 07:51:29 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:05:23.585 07:51:29 -- setup/driver.sh@58 -- # continue 00:05:23.585 07:51:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.881 07:51:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.881 07:51:29 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:23.881 07:51:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.881 07:51:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:23.881 07:51:29 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:05:23.881 07:51:29 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:23.881 07:51:29 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:23.881 07:51:29 -- setup/driver.sh@65 -- # setup reset 00:05:23.881 07:51:29 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:23.881 07:51:29 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:24.456 ************************************ 00:05:24.456 END TEST guess_driver 00:05:24.456 ************************************ 00:05:24.456 00:05:24.456 real 0m1.460s 00:05:24.456 user 0m0.575s 00:05:24.456 sys 0m0.869s 00:05:24.456 07:51:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.456 07:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.456 00:05:24.456 real 0m2.134s 00:05:24.456 user 0m0.812s 00:05:24.456 sys 0m1.360s 00:05:24.456 07:51:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:24.456 07:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.456 ************************************ 00:05:24.456 END TEST driver 00:05:24.456 ************************************ 00:05:24.456 07:51:30 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:24.456 07:51:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:24.456 07:51:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:24.456 07:51:30 -- common/autotest_common.sh@10 -- # set +x 00:05:24.456 ************************************ 00:05:24.456 START TEST devices 00:05:24.456 ************************************ 00:05:24.456 07:51:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:05:24.715 * Looking for test storage... 00:05:24.715 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:05:24.715 07:51:30 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:24.715 07:51:30 -- setup/devices.sh@192 -- # setup reset 00:05:24.715 07:51:30 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:24.715 07:51:30 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:25.283 07:51:31 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:25.283 07:51:31 -- common/autotest_common.sh@1654 -- # zoned_devs=() 00:05:25.283 07:51:31 -- common/autotest_common.sh@1654 -- # local -gA zoned_devs 00:05:25.283 07:51:31 -- common/autotest_common.sh@1655 -- # local nvme bdf 00:05:25.283 07:51:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:25.283 07:51:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme0n1 00:05:25.283 07:51:31 -- common/autotest_common.sh@1647 -- # local device=nvme0n1 00:05:25.283 07:51:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:25.283 07:51:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:25.283 07:51:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:25.283 07:51:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n1 00:05:25.283 07:51:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n1 00:05:25.283 07:51:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:25.283 07:51:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:25.283 07:51:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:25.283 07:51:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n2 00:05:25.283 07:51:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n2 00:05:25.283 07:51:31 -- 
common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:25.283 07:51:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:25.283 07:51:31 -- common/autotest_common.sh@1657 -- # for nvme in /sys/block/nvme* 00:05:25.283 07:51:31 -- common/autotest_common.sh@1658 -- # is_block_zoned nvme1n3 00:05:25.283 07:51:31 -- common/autotest_common.sh@1647 -- # local device=nvme1n3 00:05:25.283 07:51:31 -- common/autotest_common.sh@1649 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:25.283 07:51:31 -- common/autotest_common.sh@1650 -- # [[ none != none ]] 00:05:25.283 07:51:31 -- setup/devices.sh@196 -- # blocks=() 00:05:25.283 07:51:31 -- setup/devices.sh@196 -- # declare -a blocks 00:05:25.283 07:51:31 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:25.283 07:51:31 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:25.283 07:51:31 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:25.283 07:51:31 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:25.283 07:51:31 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:25.283 07:51:31 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:25.283 07:51:31 -- setup/devices.sh@202 -- # pci=0000:00:06.0 00:05:25.283 07:51:31 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\6\.\0* ]] 00:05:25.283 07:51:31 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:25.283 07:51:31 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:25.283 07:51:31 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:05:25.283 No valid GPT data, bailing 00:05:25.283 07:51:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:25.283 07:51:31 -- scripts/common.sh@393 -- # pt= 00:05:25.283 07:51:31 -- scripts/common.sh@394 -- # return 1 00:05:25.283 07:51:31 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:25.283 07:51:31 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:25.283 07:51:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:25.283 07:51:31 -- setup/common.sh@80 -- # echo 5368709120 00:05:25.283 07:51:31 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:05:25.283 07:51:31 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:25.283 07:51:31 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:06.0 00:05:25.283 07:51:31 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:25.283 07:51:31 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:05:25.283 07:51:31 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:25.283 07:51:31 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:25.283 07:51:31 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:25.283 07:51:31 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:05:25.283 07:51:31 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:05:25.283 07:51:31 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:05:25.541 No valid GPT data, bailing 00:05:25.541 07:51:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:25.541 07:51:31 -- scripts/common.sh@393 -- # pt= 00:05:25.541 07:51:31 -- scripts/common.sh@394 -- # return 1 00:05:25.541 07:51:31 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:05:25.541 07:51:31 -- setup/common.sh@76 -- # local dev=nvme1n1 00:05:25.541 07:51:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:05:25.541 07:51:31 -- setup/common.sh@80 -- # echo 4294967296 00:05:25.541 07:51:31 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:25.541 07:51:31 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:25.541 07:51:31 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:25.541 07:51:31 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:25.541 07:51:31 -- setup/devices.sh@201 -- # ctrl=nvme1n2 00:05:25.541 07:51:31 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:25.541 07:51:31 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:25.541 07:51:31 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:25.541 07:51:31 -- setup/devices.sh@204 -- # block_in_use nvme1n2 00:05:25.541 07:51:31 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:05:25.541 07:51:31 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:05:25.541 No valid GPT data, bailing 00:05:25.541 07:51:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:25.541 07:51:31 -- scripts/common.sh@393 -- # pt= 00:05:25.541 07:51:31 -- scripts/common.sh@394 -- # return 1 00:05:25.541 07:51:31 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n2 00:05:25.541 07:51:31 -- setup/common.sh@76 -- # local dev=nvme1n2 00:05:25.541 07:51:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n2 ]] 00:05:25.541 07:51:31 -- setup/common.sh@80 -- # echo 4294967296 00:05:25.541 07:51:31 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:25.541 07:51:31 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:25.541 07:51:31 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:25.541 07:51:31 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:25.541 07:51:31 -- setup/devices.sh@201 -- # ctrl=nvme1n3 00:05:25.541 07:51:31 -- setup/devices.sh@201 -- # ctrl=nvme1 00:05:25.541 07:51:31 -- setup/devices.sh@202 -- # pci=0000:00:07.0 00:05:25.541 07:51:31 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\0\7\.\0* ]] 00:05:25.541 07:51:31 -- setup/devices.sh@204 -- # block_in_use nvme1n3 00:05:25.541 07:51:31 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:05:25.541 07:51:31 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:05:25.541 No valid GPT data, bailing 00:05:25.541 07:51:31 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:25.541 07:51:31 -- scripts/common.sh@393 -- # pt= 00:05:25.541 07:51:31 -- scripts/common.sh@394 -- # return 1 00:05:25.541 07:51:31 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n3 00:05:25.541 07:51:31 -- setup/common.sh@76 -- # local dev=nvme1n3 00:05:25.541 07:51:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n3 ]] 00:05:25.541 07:51:31 -- setup/common.sh@80 -- # echo 4294967296 00:05:25.541 07:51:31 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:05:25.541 07:51:31 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:25.541 07:51:31 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:07.0 00:05:25.541 07:51:31 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:05:25.541 07:51:31 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:25.541 07:51:31 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:25.541 07:51:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:25.541 07:51:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:25.541 07:51:31 -- common/autotest_common.sh@10 -- # set +x 00:05:25.541 
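Before nvme_mount starts, devices.sh has just screened every NVMe namespace: zoned devices are skipped, "No valid GPT data, bailing" from spdk-gpt.py is treated as "unpartitioned, therefore usable", and only namespaces of at least min_disk_size (3221225472 bytes, i.e. 3 GiB) are kept. A rough stand-alone equivalent of that screening, using blkid directly in place of spdk-gpt.py (loop structure and variable names are illustrative, not the exact devices.sh code):

min_disk_size=$((3 * 1024 * 1024 * 1024))      # 3221225472, as in devices.sh
blocks=()
for block in /sys/block/nvme*; do
    dev=${block##*/}
    # zoned namespaces cannot host the ext4 test filesystem
    [[ $(cat "$block/queue/zoned" 2>/dev/null) != none ]] && continue
    # an existing partition table means the disk is already in use
    [[ -n $(blkid -s PTTYPE -o value "/dev/$dev") ]] && continue
    size=$(( $(cat "$block/size") * 512 ))      # the sysfs "size" file counts 512-byte sectors
    (( size >= min_disk_size )) && blocks+=("$dev")
done
printf 'usable test disk: %s\n' "${blocks[@]}"

Here that leaves nvme0n1 (5368709120 bytes) plus the three 4294967296-byte nvme1 namespaces, and nvme0n1 is declared as test_disk.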
************************************ 00:05:25.541 START TEST nvme_mount 00:05:25.541 ************************************ 00:05:25.541 07:51:31 -- common/autotest_common.sh@1104 -- # nvme_mount 00:05:25.541 07:51:31 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:25.541 07:51:31 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:25.541 07:51:31 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:25.541 07:51:31 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:25.541 07:51:31 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:25.541 07:51:31 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:25.541 07:51:31 -- setup/common.sh@40 -- # local part_no=1 00:05:25.541 07:51:31 -- setup/common.sh@41 -- # local size=1073741824 00:05:25.541 07:51:31 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:25.541 07:51:31 -- setup/common.sh@44 -- # parts=() 00:05:25.541 07:51:31 -- setup/common.sh@44 -- # local parts 00:05:25.541 07:51:31 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:25.541 07:51:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.541 07:51:31 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:25.541 07:51:31 -- setup/common.sh@46 -- # (( part++ )) 00:05:25.541 07:51:31 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:25.541 07:51:31 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:25.541 07:51:31 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:25.541 07:51:31 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:26.917 Creating new GPT entries in memory. 00:05:26.917 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:26.917 other utilities. 00:05:26.917 07:51:32 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:26.917 07:51:32 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.917 07:51:32 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:26.917 07:51:32 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:26.917 07:51:32 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:27.861 Creating new GPT entries in memory. 00:05:27.861 The operation has completed successfully. 
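The "Creating new GPT entries" / "The operation has completed successfully" messages above come from sgdisk: the disk is first zapped, then a single small test partition is created at the exact sectors shown in the trace, and the new node is formatted and mounted once udev has produced it. A condensed sketch of that sequence, with udevadm settle standing in for scripts/sync_dev_uevents.sh:

disk=/dev/nvme0n1
mnt=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount
part_start=2048
part_end=$(( part_start + 1073741824 / 4096 - 1 ))    # 264191, matching the trace
sgdisk "$disk" --zap-all                               # destroy any existing GPT/MBR
flock "$disk" sgdisk "$disk" --new=1:${part_start}:${part_end}
udevadm settle                                         # wait for /dev/nvme0n1p1 to appear
mkdir -p "$mnt"
mkfs.ext4 -qF "${disk}p1"
mount "${disk}p1" "$mnt"

The "wait 63756" a little further down is most likely the script reaping its background uevent-sync helper before it runs mkfs on the freshly created partition.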
00:05:27.861 07:51:33 -- setup/common.sh@57 -- # (( part++ )) 00:05:27.861 07:51:33 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:27.861 07:51:33 -- setup/common.sh@62 -- # wait 63756 00:05:27.861 07:51:33 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.861 07:51:33 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:05:27.861 07:51:33 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.861 07:51:33 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:27.861 07:51:33 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:27.861 07:51:33 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.861 07:51:33 -- setup/devices.sh@105 -- # verify 0000:00:06.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:27.861 07:51:33 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:27.861 07:51:33 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:27.861 07:51:33 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:27.861 07:51:33 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:27.861 07:51:33 -- setup/devices.sh@53 -- # local found=0 00:05:27.861 07:51:33 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:27.861 07:51:33 -- setup/devices.sh@56 -- # : 00:05:27.861 07:51:33 -- setup/devices.sh@59 -- # local pci status 00:05:27.861 07:51:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.861 07:51:33 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:27.861 07:51:33 -- setup/devices.sh@47 -- # setup output config 00:05:27.861 07:51:33 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:27.861 07:51:33 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:27.861 07:51:33 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.861 07:51:33 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:27.861 07:51:33 -- setup/devices.sh@63 -- # found=1 00:05:27.861 07:51:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:27.861 07:51:33 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:27.861 07:51:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.119 07:51:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.119 07:51:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.377 07:51:33 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.377 07:51:33 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.377 07:51:34 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:28.377 07:51:34 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:28.377 07:51:34 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.377 07:51:34 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:28.377 07:51:34 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:28.377 07:51:34 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:28.377 07:51:34 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.377 07:51:34 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.377 07:51:34 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:28.377 07:51:34 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:28.377 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:28.377 07:51:34 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:28.377 07:51:34 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:28.636 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:28.636 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:28.636 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:28.636 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:28.636 07:51:34 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:05:28.636 07:51:34 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:05:28.636 07:51:34 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.636 07:51:34 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:28.636 07:51:34 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:28.636 07:51:34 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.636 07:51:34 -- setup/devices.sh@116 -- # verify 0000:00:06.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:28.636 07:51:34 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:28.636 07:51:34 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:28.636 07:51:34 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:28.636 07:51:34 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:28.636 07:51:34 -- setup/devices.sh@53 -- # local found=0 00:05:28.636 07:51:34 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:28.636 07:51:34 -- setup/devices.sh@56 -- # : 00:05:28.636 07:51:34 -- setup/devices.sh@59 -- # local pci status 00:05:28.636 07:51:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.636 07:51:34 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:28.636 07:51:34 -- setup/devices.sh@47 -- # setup output config 00:05:28.636 07:51:34 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:28.636 07:51:34 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:28.895 07:51:34 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.895 07:51:34 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:28.895 07:51:34 -- setup/devices.sh@63 -- # found=1 00:05:28.895 07:51:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:28.896 07:51:34 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:28.896 
07:51:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.155 07:51:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:29.155 07:51:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.414 07:51:34 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:29.415 07:51:34 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.415 07:51:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:29.415 07:51:35 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:05:29.415 07:51:35 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.415 07:51:35 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:29.415 07:51:35 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:05:29.415 07:51:35 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:29.415 07:51:35 -- setup/devices.sh@125 -- # verify 0000:00:06.0 data@nvme0n1 '' '' 00:05:29.415 07:51:35 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:29.415 07:51:35 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:29.415 07:51:35 -- setup/devices.sh@50 -- # local mount_point= 00:05:29.415 07:51:35 -- setup/devices.sh@51 -- # local test_file= 00:05:29.415 07:51:35 -- setup/devices.sh@53 -- # local found=0 00:05:29.415 07:51:35 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:29.415 07:51:35 -- setup/devices.sh@59 -- # local pci status 00:05:29.415 07:51:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.415 07:51:35 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:29.415 07:51:35 -- setup/devices.sh@47 -- # setup output config 00:05:29.415 07:51:35 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.415 07:51:35 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:29.674 07:51:35 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:29.674 07:51:35 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:29.674 07:51:35 -- setup/devices.sh@63 -- # found=1 00:05:29.674 07:51:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.674 07:51:35 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:29.674 07:51:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.933 07:51:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:29.933 07:51:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.933 07:51:35 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:29.933 07:51:35 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:30.192 07:51:35 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:30.192 07:51:35 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:30.192 07:51:35 -- setup/devices.sh@68 -- # return 0 00:05:30.192 07:51:35 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:30.192 07:51:35 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:30.192 07:51:35 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:30.192 07:51:35 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:30.192 07:51:35 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:30.192 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:05:30.192 00:05:30.192 real 0m4.464s 00:05:30.192 user 0m1.025s 00:05:30.192 sys 0m1.143s 00:05:30.192 07:51:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.192 07:51:35 -- common/autotest_common.sh@10 -- # set +x 00:05:30.192 ************************************ 00:05:30.192 END TEST nvme_mount 00:05:30.192 ************************************ 00:05:30.192 07:51:35 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:30.192 07:51:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:30.192 07:51:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:30.192 07:51:35 -- common/autotest_common.sh@10 -- # set +x 00:05:30.192 ************************************ 00:05:30.192 START TEST dm_mount 00:05:30.192 ************************************ 00:05:30.192 07:51:35 -- common/autotest_common.sh@1104 -- # dm_mount 00:05:30.192 07:51:35 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:30.192 07:51:35 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:30.192 07:51:35 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:30.192 07:51:35 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:30.192 07:51:35 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:30.192 07:51:35 -- setup/common.sh@40 -- # local part_no=2 00:05:30.192 07:51:35 -- setup/common.sh@41 -- # local size=1073741824 00:05:30.192 07:51:35 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:30.192 07:51:35 -- setup/common.sh@44 -- # parts=() 00:05:30.192 07:51:35 -- setup/common.sh@44 -- # local parts 00:05:30.192 07:51:35 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:30.192 07:51:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.192 07:51:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:30.192 07:51:35 -- setup/common.sh@46 -- # (( part++ )) 00:05:30.192 07:51:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.192 07:51:35 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:30.192 07:51:35 -- setup/common.sh@46 -- # (( part++ )) 00:05:30.192 07:51:35 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:30.192 07:51:35 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:05:30.192 07:51:35 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:30.192 07:51:35 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:31.130 Creating new GPT entries in memory. 00:05:31.130 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:31.130 other utilities. 00:05:31.130 07:51:36 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:31.130 07:51:36 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:31.130 07:51:36 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:31.130 07:51:36 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:31.130 07:51:36 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:05:32.505 Creating new GPT entries in memory. 00:05:32.505 The operation has completed successfully. 00:05:32.505 07:51:37 -- setup/common.sh@57 -- # (( part++ )) 00:05:32.505 07:51:37 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:32.505 07:51:37 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:32.505 07:51:37 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:32.505 07:51:37 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:05:33.441 The operation has completed successfully. 00:05:33.441 07:51:38 -- setup/common.sh@57 -- # (( part++ )) 00:05:33.441 07:51:38 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:33.441 07:51:38 -- setup/common.sh@62 -- # wait 64215 00:05:33.441 07:51:38 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:33.441 07:51:38 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.441 07:51:38 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:33.441 07:51:38 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:33.441 07:51:38 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:33.441 07:51:38 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:33.441 07:51:38 -- setup/devices.sh@161 -- # break 00:05:33.441 07:51:38 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:33.441 07:51:38 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:33.441 07:51:38 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:33.441 07:51:38 -- setup/devices.sh@166 -- # dm=dm-0 00:05:33.441 07:51:38 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:33.441 07:51:38 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:33.441 07:51:38 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.441 07:51:38 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:05:33.441 07:51:38 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.441 07:51:38 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:33.441 07:51:38 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:33.441 07:51:39 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.441 07:51:39 -- setup/devices.sh@174 -- # verify 0000:00:06.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:33.441 07:51:39 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:33.441 07:51:39 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:33.441 07:51:39 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.441 07:51:39 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:33.441 07:51:39 -- setup/devices.sh@53 -- # local found=0 00:05:33.441 07:51:39 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:33.441 07:51:39 -- setup/devices.sh@56 -- # : 00:05:33.441 07:51:39 -- setup/devices.sh@59 -- # local pci status 00:05:33.441 07:51:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.441 07:51:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:33.441 07:51:39 -- setup/devices.sh@47 -- # setup output config 00:05:33.441 07:51:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.441 07:51:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:33.441 07:51:39 -- 
setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.441 07:51:39 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:33.441 07:51:39 -- setup/devices.sh@63 -- # found=1 00:05:33.441 07:51:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.441 07:51:39 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.441 07:51:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.700 07:51:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.700 07:51:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.959 07:51:39 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:33.959 07:51:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.959 07:51:39 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.959 07:51:39 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:05:33.959 07:51:39 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.959 07:51:39 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:05:33.959 07:51:39 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:05:33.959 07:51:39 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:33.959 07:51:39 -- setup/devices.sh@184 -- # verify 0000:00:06.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:33.959 07:51:39 -- setup/devices.sh@48 -- # local dev=0000:00:06.0 00:05:33.959 07:51:39 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:33.959 07:51:39 -- setup/devices.sh@50 -- # local mount_point= 00:05:33.959 07:51:39 -- setup/devices.sh@51 -- # local test_file= 00:05:33.959 07:51:39 -- setup/devices.sh@53 -- # local found=0 00:05:33.959 07:51:39 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:33.959 07:51:39 -- setup/devices.sh@59 -- # local pci status 00:05:33.959 07:51:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.959 07:51:39 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:06.0 00:05:33.959 07:51:39 -- setup/devices.sh@47 -- # setup output config 00:05:33.959 07:51:39 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:33.959 07:51:39 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:05:34.218 07:51:39 -- setup/devices.sh@62 -- # [[ 0000:00:06.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.218 07:51:39 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:34.218 07:51:39 -- setup/devices.sh@63 -- # found=1 00:05:34.218 07:51:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.218 07:51:39 -- setup/devices.sh@62 -- # [[ 0000:00:07.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.218 07:51:39 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.477 07:51:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.477 07:51:40 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.477 07:51:40 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\0\6\.\0 ]] 00:05:34.477 07:51:40 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:34.736 07:51:40 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:34.736 07:51:40 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:34.736 07:51:40 -- setup/devices.sh@68 -- # return 0 00:05:34.736 07:51:40 -- setup/devices.sh@187 -- # cleanup_dm 00:05:34.736 07:51:40 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:34.736 07:51:40 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:34.736 07:51:40 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:34.736 07:51:40 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.736 07:51:40 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:34.736 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:34.736 07:51:40 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:34.736 07:51:40 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:34.736 00:05:34.736 real 0m4.526s 00:05:34.736 user 0m0.653s 00:05:34.736 sys 0m0.799s 00:05:34.736 07:51:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.736 07:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:34.736 ************************************ 00:05:34.736 END TEST dm_mount 00:05:34.736 ************************************ 00:05:34.736 07:51:40 -- setup/devices.sh@1 -- # cleanup 00:05:34.736 07:51:40 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:34.736 07:51:40 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:05:34.736 07:51:40 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.736 07:51:40 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:34.736 07:51:40 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:34.736 07:51:40 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:34.996 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:05:34.996 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:05:34.996 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:34.996 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:34.996 07:51:40 -- setup/devices.sh@12 -- # cleanup_dm 00:05:34.996 07:51:40 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:05:34.996 07:51:40 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:34.996 07:51:40 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:34.996 07:51:40 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:34.996 07:51:40 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:34.996 07:51:40 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:34.996 00:05:34.996 real 0m10.491s 00:05:34.996 user 0m2.338s 00:05:34.996 sys 0m2.496s 00:05:34.996 07:51:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.996 07:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:34.996 ************************************ 00:05:34.996 END TEST devices 00:05:34.996 ************************************ 00:05:34.996 00:05:34.996 real 0m21.836s 00:05:34.996 user 0m7.340s 00:05:34.996 sys 0m8.935s 00:05:34.996 07:51:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.996 07:51:40 -- common/autotest_common.sh@10 -- # set +x 00:05:34.996 ************************************ 00:05:34.996 END TEST setup.sh 00:05:34.996 ************************************ 00:05:34.996 07:51:40 -- 
spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:35.254 Hugepages 00:05:35.254 node hugesize free / total 00:05:35.254 node0 1048576kB 0 / 0 00:05:35.254 node0 2048kB 2048 / 2048 00:05:35.254 00:05:35.254 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:35.254 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:35.254 NVMe 0000:00:06.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:35.513 NVMe 0000:00:07.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:35.513 07:51:41 -- spdk/autotest.sh@141 -- # uname -s 00:05:35.513 07:51:41 -- spdk/autotest.sh@141 -- # [[ Linux == Linux ]] 00:05:35.513 07:51:41 -- spdk/autotest.sh@143 -- # nvme_namespace_revert 00:05:35.513 07:51:41 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:36.080 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:36.080 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.338 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:05:36.338 07:51:41 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:37.272 07:51:42 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:37.272 07:51:42 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:37.272 07:51:42 -- common/autotest_common.sh@1519 -- # bdfs=($(get_nvme_bdfs)) 00:05:37.272 07:51:43 -- common/autotest_common.sh@1519 -- # get_nvme_bdfs 00:05:37.272 07:51:43 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:37.272 07:51:43 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:37.272 07:51:43 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:37.272 07:51:43 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:37.272 07:51:43 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:37.272 07:51:43 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:37.272 07:51:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:37.272 07:51:43 -- common/autotest_common.sh@1521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.837 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.837 Waiting for block devices as requested 00:05:37.837 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.837 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.837 07:51:43 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:37.837 07:51:43 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:06.0 00:05:37.837 07:51:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:37.837 07:51:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:06.0/nvme/nvme 00:05:37.837 07:51:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:37.837 07:51:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 ]] 00:05:37.837 07:51:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:06.0/nvme/nvme0 00:05:37.837 07:51:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:37.837 07:51:43 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme0 00:05:37.837 07:51:43 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme0 ]] 00:05:37.837 07:51:43 -- 
common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme0 00:05:37.837 07:51:43 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:37.837 07:51:43 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:37.837 07:51:43 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:37.837 07:51:43 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:37.837 07:51:43 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:38.095 07:51:43 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme0 00:05:38.095 07:51:43 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:38.095 07:51:43 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:38.095 07:51:43 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:38.095 07:51:43 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:38.095 07:51:43 -- common/autotest_common.sh@1542 -- # continue 00:05:38.095 07:51:43 -- common/autotest_common.sh@1523 -- # for bdf in "${bdfs[@]}" 00:05:38.095 07:51:43 -- common/autotest_common.sh@1524 -- # get_nvme_ctrlr_from_bdf 0000:00:07.0 00:05:38.095 07:51:43 -- common/autotest_common.sh@1487 -- # grep 0000:00:07.0/nvme/nvme 00:05:38.095 07:51:43 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:38.095 07:51:43 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:38.095 07:51:43 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 ]] 00:05:38.095 07:51:43 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:07.0/nvme/nvme1 00:05:38.095 07:51:43 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:38.095 07:51:43 -- common/autotest_common.sh@1524 -- # nvme_ctrlr=/dev/nvme1 00:05:38.095 07:51:43 -- common/autotest_common.sh@1525 -- # [[ -z /dev/nvme1 ]] 00:05:38.095 07:51:43 -- common/autotest_common.sh@1530 -- # nvme id-ctrl /dev/nvme1 00:05:38.095 07:51:43 -- common/autotest_common.sh@1530 -- # grep oacs 00:05:38.095 07:51:43 -- common/autotest_common.sh@1530 -- # cut -d: -f2 00:05:38.095 07:51:43 -- common/autotest_common.sh@1530 -- # oacs=' 0x12a' 00:05:38.095 07:51:43 -- common/autotest_common.sh@1531 -- # oacs_ns_manage=8 00:05:38.095 07:51:43 -- common/autotest_common.sh@1533 -- # [[ 8 -ne 0 ]] 00:05:38.095 07:51:43 -- common/autotest_common.sh@1539 -- # nvme id-ctrl /dev/nvme1 00:05:38.095 07:51:43 -- common/autotest_common.sh@1539 -- # grep unvmcap 00:05:38.095 07:51:43 -- common/autotest_common.sh@1539 -- # cut -d: -f2 00:05:38.095 07:51:43 -- common/autotest_common.sh@1539 -- # unvmcap=' 0' 00:05:38.095 07:51:43 -- common/autotest_common.sh@1540 -- # [[ 0 -eq 0 ]] 00:05:38.095 07:51:43 -- common/autotest_common.sh@1542 -- # continue 00:05:38.095 07:51:43 -- spdk/autotest.sh@146 -- # timing_exit pre_cleanup 00:05:38.095 07:51:43 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:38.095 07:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:38.095 07:51:43 -- spdk/autotest.sh@149 -- # timing_enter afterboot 00:05:38.095 07:51:43 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:38.095 07:51:43 -- common/autotest_common.sh@10 -- # set +x 00:05:38.095 07:51:43 -- spdk/autotest.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:38.662 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.662 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.920 0000:00:07.0 (1b36 0010): nvme -> 
uio_pci_generic 00:05:38.920 07:51:44 -- spdk/autotest.sh@151 -- # timing_exit afterboot 00:05:38.920 07:51:44 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:38.920 07:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:38.921 07:51:44 -- spdk/autotest.sh@155 -- # opal_revert_cleanup 00:05:38.921 07:51:44 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:05:38.921 07:51:44 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:05:38.921 07:51:44 -- common/autotest_common.sh@1562 -- # bdfs=() 00:05:38.921 07:51:44 -- common/autotest_common.sh@1562 -- # local bdfs 00:05:38.921 07:51:44 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:38.921 07:51:44 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:38.921 07:51:44 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:38.921 07:51:44 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:38.921 07:51:44 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:38.921 07:51:44 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:38.921 07:51:44 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:38.921 07:51:44 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:05:38.921 07:51:44 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:38.921 07:51:44 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:06.0/device 00:05:38.921 07:51:44 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:38.921 07:51:44 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:38.921 07:51:44 -- common/autotest_common.sh@1564 -- # for bdf in $(get_nvme_bdfs) 00:05:38.921 07:51:44 -- common/autotest_common.sh@1565 -- # cat /sys/bus/pci/devices/0000:00:07.0/device 00:05:38.921 07:51:44 -- common/autotest_common.sh@1565 -- # device=0x0010 00:05:38.921 07:51:44 -- common/autotest_common.sh@1566 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:38.921 07:51:44 -- common/autotest_common.sh@1571 -- # printf '%s\n' 00:05:38.921 07:51:44 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:05:38.921 07:51:44 -- common/autotest_common.sh@1578 -- # return 0 00:05:38.921 07:51:44 -- spdk/autotest.sh@161 -- # '[' 0 -eq 1 ']' 00:05:38.921 07:51:44 -- spdk/autotest.sh@165 -- # '[' 1 -eq 1 ']' 00:05:38.921 07:51:44 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:38.921 07:51:44 -- spdk/autotest.sh@166 -- # [[ 0 -eq 1 ]] 00:05:38.921 07:51:44 -- spdk/autotest.sh@173 -- # timing_enter lib 00:05:38.921 07:51:44 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:38.921 07:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:38.921 07:51:44 -- spdk/autotest.sh@175 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:38.921 07:51:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:38.921 07:51:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:38.921 07:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:38.921 ************************************ 00:05:38.921 START TEST env 00:05:38.921 ************************************ 00:05:38.921 07:51:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:39.189 * Looking for test storage... 
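A minimal sketch of the NVMe discovery and capability checks traced above, assuming the gen_nvme.sh helper, $rootdir, and the sysfs paths shown in the log; supports_ns_management is an illustrative name, the rest mirrors the trace:

# Enumerate NVMe PCI addresses the way get_nvme_bdfs does above:
# gen_nvme.sh emits an SPDK config and jq pulls each controller's traddr.
get_nvme_bdfs() {
    "$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'
}

# Namespace-management support is OACS bit 3, parsed from nvme id-ctrl
# with the same grep/cut pipeline seen in the trace (0x12a & 0x8 == 8 here).
supports_ns_management() {
    local ctrlr=$1 oacs
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)
    (((oacs & 0x8) != 0))
}

for bdf in $(get_nvme_bdfs); do
    # opal_revert_cleanup above only acts on controllers with device ID 0x0a54.
    if [[ $(cat "/sys/bus/pci/devices/$bdf/device") == 0x0a54 ]]; then
        echo "$bdf"
    fi
done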
00:05:39.189 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:39.189 07:51:44 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:39.189 07:51:44 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.189 07:51:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.189 07:51:44 -- common/autotest_common.sh@10 -- # set +x 00:05:39.189 ************************************ 00:05:39.189 START TEST env_memory 00:05:39.189 ************************************ 00:05:39.189 07:51:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:39.189 00:05:39.189 00:05:39.189 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.189 http://cunit.sourceforge.net/ 00:05:39.189 00:05:39.189 00:05:39.189 Suite: memory 00:05:39.189 Test: alloc and free memory map ...[2024-07-13 07:51:44.851050] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:39.189 passed 00:05:39.189 Test: mem map translation ...[2024-07-13 07:51:44.881840] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:39.189 [2024-07-13 07:51:44.881878] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:39.189 [2024-07-13 07:51:44.881934] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:39.189 [2024-07-13 07:51:44.881945] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:39.189 passed 00:05:39.189 Test: mem map registration ...[2024-07-13 07:51:44.947378] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:39.189 [2024-07-13 07:51:44.947421] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:39.189 passed 00:05:39.465 Test: mem map adjacent registrations ...passed 00:05:39.465 00:05:39.465 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.465 suites 1 1 n/a 0 0 00:05:39.465 tests 4 4 4 0 0 00:05:39.465 asserts 152 152 152 0 n/a 00:05:39.465 00:05:39.465 Elapsed time = 0.215 seconds 00:05:39.465 00:05:39.465 real 0m0.230s 00:05:39.465 user 0m0.216s 00:05:39.465 sys 0m0.011s 00:05:39.465 07:51:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.465 07:51:45 -- common/autotest_common.sh@10 -- # set +x 00:05:39.465 ************************************ 00:05:39.465 END TEST env_memory 00:05:39.465 ************************************ 00:05:39.465 07:51:45 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:39.465 07:51:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:39.465 07:51:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:39.465 07:51:45 -- common/autotest_common.sh@10 -- # set +x 00:05:39.465 ************************************ 00:05:39.465 START TEST env_vtophys 00:05:39.465 ************************************ 00:05:39.465 07:51:45 -- common/autotest_common.sh@1104 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:39.465 EAL: lib.eal log level changed from notice to debug 00:05:39.465 EAL: Detected lcore 0 as core 0 on socket 0 00:05:39.465 EAL: Detected lcore 1 as core 0 on socket 0 00:05:39.465 EAL: Detected lcore 2 as core 0 on socket 0 00:05:39.465 EAL: Detected lcore 3 as core 0 on socket 0 00:05:39.465 EAL: Detected lcore 4 as core 0 on socket 0 00:05:39.465 EAL: Detected lcore 5 as core 0 on socket 0 00:05:39.465 EAL: Detected lcore 6 as core 0 on socket 0 00:05:39.465 EAL: Detected lcore 7 as core 0 on socket 0 00:05:39.465 EAL: Detected lcore 8 as core 0 on socket 0 00:05:39.465 EAL: Detected lcore 9 as core 0 on socket 0 00:05:39.465 EAL: Maximum logical cores by configuration: 128 00:05:39.465 EAL: Detected CPU lcores: 10 00:05:39.465 EAL: Detected NUMA nodes: 1 00:05:39.465 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:39.465 EAL: Detected shared linkage of DPDK 00:05:39.465 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:39.465 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:39.465 EAL: Registered [vdev] bus. 00:05:39.465 EAL: bus.vdev log level changed from disabled to notice 00:05:39.465 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:39.465 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:39.465 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:39.465 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:39.465 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:39.465 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:39.465 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:39.465 EAL: open shared lib /home/vagrant/spdk_repo/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:39.465 EAL: No shared files mode enabled, IPC will be disabled 00:05:39.465 EAL: No shared files mode enabled, IPC is disabled 00:05:39.465 EAL: Selected IOVA mode 'PA' 00:05:39.465 EAL: Probing VFIO support... 00:05:39.465 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:39.465 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:39.465 EAL: Ask a virtual area of 0x2e000 bytes 00:05:39.466 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:39.466 EAL: Setting up physically contiguous memory... 
00:05:39.466 EAL: Setting maximum number of open files to 524288 00:05:39.466 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:39.466 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:39.466 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.466 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:39.466 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.466 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.466 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:39.466 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:39.466 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.466 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:39.466 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.466 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.466 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:39.466 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:39.466 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.466 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:39.466 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.466 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.466 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:39.466 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:39.466 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.466 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:39.466 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.466 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.466 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:39.466 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:39.466 EAL: Hugepages will be freed exactly as allocated. 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: TSC frequency is ~2200000 KHz 00:05:39.466 EAL: Main lcore 0 is ready (tid=7fcaa6258a00;cpuset=[0]) 00:05:39.466 EAL: Trying to obtain current memory policy. 00:05:39.466 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.466 EAL: Restoring previous memory policy: 0 00:05:39.466 EAL: request: mp_malloc_sync 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: Heap on socket 0 was expanded by 2MB 00:05:39.466 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:39.466 EAL: Mem event callback 'spdk:(nil)' registered 00:05:39.466 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:39.466 00:05:39.466 00:05:39.466 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.466 http://cunit.sourceforge.net/ 00:05:39.466 00:05:39.466 00:05:39.466 Suite: components_suite 00:05:39.466 Test: vtophys_malloc_test ...passed 00:05:39.466 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:05:39.466 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.466 EAL: Restoring previous memory policy: 4 00:05:39.466 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.466 EAL: request: mp_malloc_sync 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: Heap on socket 0 was expanded by 4MB 00:05:39.466 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.466 EAL: request: mp_malloc_sync 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: Heap on socket 0 was shrunk by 4MB 00:05:39.466 EAL: Trying to obtain current memory policy. 00:05:39.466 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.466 EAL: Restoring previous memory policy: 4 00:05:39.466 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.466 EAL: request: mp_malloc_sync 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: Heap on socket 0 was expanded by 6MB 00:05:39.466 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.466 EAL: request: mp_malloc_sync 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: Heap on socket 0 was shrunk by 6MB 00:05:39.466 EAL: Trying to obtain current memory policy. 00:05:39.466 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.466 EAL: Restoring previous memory policy: 4 00:05:39.466 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.466 EAL: request: mp_malloc_sync 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: Heap on socket 0 was expanded by 10MB 00:05:39.466 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.466 EAL: request: mp_malloc_sync 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: Heap on socket 0 was shrunk by 10MB 00:05:39.466 EAL: Trying to obtain current memory policy. 00:05:39.466 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.466 EAL: Restoring previous memory policy: 4 00:05:39.466 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.466 EAL: request: mp_malloc_sync 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: Heap on socket 0 was expanded by 18MB 00:05:39.466 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.466 EAL: request: mp_malloc_sync 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: Heap on socket 0 was shrunk by 18MB 00:05:39.466 EAL: Trying to obtain current memory policy. 00:05:39.466 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.466 EAL: Restoring previous memory policy: 4 00:05:39.466 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.466 EAL: request: mp_malloc_sync 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: Heap on socket 0 was expanded by 34MB 00:05:39.466 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.466 EAL: request: mp_malloc_sync 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: Heap on socket 0 was shrunk by 34MB 00:05:39.466 EAL: Trying to obtain current memory policy. 
00:05:39.466 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.466 EAL: Restoring previous memory policy: 4 00:05:39.466 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.466 EAL: request: mp_malloc_sync 00:05:39.466 EAL: No shared files mode enabled, IPC is disabled 00:05:39.466 EAL: Heap on socket 0 was expanded by 66MB 00:05:39.728 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.728 EAL: request: mp_malloc_sync 00:05:39.728 EAL: No shared files mode enabled, IPC is disabled 00:05:39.728 EAL: Heap on socket 0 was shrunk by 66MB 00:05:39.728 EAL: Trying to obtain current memory policy. 00:05:39.728 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.728 EAL: Restoring previous memory policy: 4 00:05:39.728 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.728 EAL: request: mp_malloc_sync 00:05:39.728 EAL: No shared files mode enabled, IPC is disabled 00:05:39.728 EAL: Heap on socket 0 was expanded by 130MB 00:05:39.728 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.728 EAL: request: mp_malloc_sync 00:05:39.729 EAL: No shared files mode enabled, IPC is disabled 00:05:39.729 EAL: Heap on socket 0 was shrunk by 130MB 00:05:39.729 EAL: Trying to obtain current memory policy. 00:05:39.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.729 EAL: Restoring previous memory policy: 4 00:05:39.729 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.729 EAL: request: mp_malloc_sync 00:05:39.729 EAL: No shared files mode enabled, IPC is disabled 00:05:39.729 EAL: Heap on socket 0 was expanded by 258MB 00:05:39.729 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.729 EAL: request: mp_malloc_sync 00:05:39.729 EAL: No shared files mode enabled, IPC is disabled 00:05:39.729 EAL: Heap on socket 0 was shrunk by 258MB 00:05:39.729 EAL: Trying to obtain current memory policy. 00:05:39.729 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.987 EAL: Restoring previous memory policy: 4 00:05:39.987 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.987 EAL: request: mp_malloc_sync 00:05:39.987 EAL: No shared files mode enabled, IPC is disabled 00:05:39.987 EAL: Heap on socket 0 was expanded by 514MB 00:05:39.987 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.987 EAL: request: mp_malloc_sync 00:05:39.987 EAL: No shared files mode enabled, IPC is disabled 00:05:39.987 EAL: Heap on socket 0 was shrunk by 514MB 00:05:39.987 EAL: Trying to obtain current memory policy. 
00:05:39.987 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.246 EAL: Restoring previous memory policy: 4 00:05:40.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.246 EAL: request: mp_malloc_sync 00:05:40.246 EAL: No shared files mode enabled, IPC is disabled 00:05:40.246 EAL: Heap on socket 0 was expanded by 1026MB 00:05:40.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.246 passed 00:05:40.246 00:05:40.246 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.246 suites 1 1 n/a 0 0 00:05:40.246 tests 2 2 2 0 0 00:05:40.246 asserts 5274 5274 5274 0 n/a 00:05:40.246 00:05:40.246 Elapsed time = 0.756 seconds 00:05:40.246 EAL: request: mp_malloc_sync 00:05:40.246 EAL: No shared files mode enabled, IPC is disabled 00:05:40.246 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:40.246 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.246 EAL: request: mp_malloc_sync 00:05:40.246 EAL: No shared files mode enabled, IPC is disabled 00:05:40.246 EAL: Heap on socket 0 was shrunk by 2MB 00:05:40.246 EAL: No shared files mode enabled, IPC is disabled 00:05:40.246 EAL: No shared files mode enabled, IPC is disabled 00:05:40.246 EAL: No shared files mode enabled, IPC is disabled 00:05:40.246 00:05:40.246 real 0m0.948s 00:05:40.246 user 0m0.490s 00:05:40.246 sys 0m0.328s 00:05:40.246 07:51:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.246 07:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:40.246 ************************************ 00:05:40.246 END TEST env_vtophys 00:05:40.246 ************************************ 00:05:40.505 07:51:46 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:40.505 07:51:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.505 07:51:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.505 07:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:40.505 ************************************ 00:05:40.505 START TEST env_pci 00:05:40.505 ************************************ 00:05:40.505 07:51:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:40.505 00:05:40.505 00:05:40.505 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.505 http://cunit.sourceforge.net/ 00:05:40.505 00:05:40.505 00:05:40.505 Suite: pci 00:05:40.505 Test: pci_hook ...[2024-07-13 07:51:46.090999] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 65340 has claimed it 00:05:40.505 passed 00:05:40.505 00:05:40.505 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.505 suites 1 1 n/a 0 0 00:05:40.505 tests 1 1 1 0 0 00:05:40.505 asserts 25 25 25 0 n/a 00:05:40.505 00:05:40.505 Elapsed time = 0.002 seconds 00:05:40.505 EAL: Cannot find device (10000:00:01.0) 00:05:40.505 EAL: Failed to attach device on primary process 00:05:40.505 00:05:40.505 real 0m0.017s 00:05:40.505 user 0m0.008s 00:05:40.505 sys 0m0.008s 00:05:40.505 07:51:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.505 07:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:40.505 ************************************ 00:05:40.505 END TEST env_pci 00:05:40.505 ************************************ 00:05:40.505 07:51:46 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:40.505 07:51:46 -- env/env.sh@15 -- # uname 00:05:40.505 07:51:46 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:40.505 07:51:46 -- env/env.sh@22 -- # 
argv+=--base-virtaddr=0x200000000000 00:05:40.505 07:51:46 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.505 07:51:46 -- common/autotest_common.sh@1077 -- # '[' 5 -le 1 ']' 00:05:40.505 07:51:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.505 07:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:40.505 ************************************ 00:05:40.505 START TEST env_dpdk_post_init 00:05:40.505 ************************************ 00:05:40.505 07:51:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:40.505 EAL: Detected CPU lcores: 10 00:05:40.505 EAL: Detected NUMA nodes: 1 00:05:40.505 EAL: Detected shared linkage of DPDK 00:05:40.505 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.505 EAL: Selected IOVA mode 'PA' 00:05:40.505 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.505 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:06.0 (socket -1) 00:05:40.505 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:07.0 (socket -1) 00:05:40.765 Starting DPDK initialization... 00:05:40.765 Starting SPDK post initialization... 00:05:40.765 SPDK NVMe probe 00:05:40.765 Attaching to 0000:00:06.0 00:05:40.765 Attaching to 0000:00:07.0 00:05:40.765 Attached to 0000:00:06.0 00:05:40.765 Attached to 0000:00:07.0 00:05:40.765 Cleaning up... 00:05:40.765 00:05:40.765 real 0m0.173s 00:05:40.765 user 0m0.036s 00:05:40.765 sys 0m0.038s 00:05:40.765 07:51:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.765 07:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:40.765 ************************************ 00:05:40.765 END TEST env_dpdk_post_init 00:05:40.765 ************************************ 00:05:40.765 07:51:46 -- env/env.sh@26 -- # uname 00:05:40.765 07:51:46 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:40.765 07:51:46 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.765 07:51:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:40.765 07:51:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:40.765 07:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:40.765 ************************************ 00:05:40.765 START TEST env_mem_callbacks 00:05:40.765 ************************************ 00:05:40.765 07:51:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:40.765 EAL: Detected CPU lcores: 10 00:05:40.765 EAL: Detected NUMA nodes: 1 00:05:40.765 EAL: Detected shared linkage of DPDK 00:05:40.765 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:40.765 EAL: Selected IOVA mode 'PA' 00:05:40.765 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:40.765 00:05:40.765 00:05:40.765 CUnit - A unit testing framework for C - Version 2.1-3 00:05:40.765 http://cunit.sourceforge.net/ 00:05:40.765 00:05:40.765 00:05:40.765 Suite: memory 00:05:40.765 Test: test ... 
00:05:40.765 register 0x200000200000 2097152 00:05:40.765 malloc 3145728 00:05:40.765 register 0x200000400000 4194304 00:05:40.765 buf 0x200000500000 len 3145728 PASSED 00:05:40.765 malloc 64 00:05:40.765 buf 0x2000004fff40 len 64 PASSED 00:05:40.765 malloc 4194304 00:05:40.765 register 0x200000800000 6291456 00:05:40.765 buf 0x200000a00000 len 4194304 PASSED 00:05:40.765 free 0x200000500000 3145728 00:05:40.765 free 0x2000004fff40 64 00:05:40.766 unregister 0x200000400000 4194304 PASSED 00:05:40.766 free 0x200000a00000 4194304 00:05:40.766 unregister 0x200000800000 6291456 PASSED 00:05:40.766 malloc 8388608 00:05:40.766 register 0x200000400000 10485760 00:05:40.766 buf 0x200000600000 len 8388608 PASSED 00:05:40.766 free 0x200000600000 8388608 00:05:40.766 unregister 0x200000400000 10485760 PASSED 00:05:40.766 passed 00:05:40.766 00:05:40.766 Run Summary: Type Total Ran Passed Failed Inactive 00:05:40.766 suites 1 1 n/a 0 0 00:05:40.766 tests 1 1 1 0 0 00:05:40.766 asserts 15 15 15 0 n/a 00:05:40.766 00:05:40.766 Elapsed time = 0.009 seconds 00:05:40.766 00:05:40.766 real 0m0.140s 00:05:40.766 user 0m0.017s 00:05:40.766 sys 0m0.022s 00:05:40.766 07:51:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.766 07:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:40.766 ************************************ 00:05:40.766 END TEST env_mem_callbacks 00:05:40.766 ************************************ 00:05:40.766 00:05:40.766 real 0m1.859s 00:05:40.766 user 0m0.888s 00:05:40.766 sys 0m0.618s 00:05:40.766 07:51:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.766 07:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:40.766 ************************************ 00:05:40.766 END TEST env 00:05:40.766 ************************************ 00:05:41.025 07:51:46 -- spdk/autotest.sh@176 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:41.025 07:51:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:41.025 07:51:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:41.025 07:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:41.025 ************************************ 00:05:41.025 START TEST rpc 00:05:41.025 ************************************ 00:05:41.025 07:51:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:41.025 * Looking for test storage... 00:05:41.025 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:41.025 07:51:46 -- rpc/rpc.sh@65 -- # spdk_pid=65449 00:05:41.025 07:51:46 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:41.025 07:51:46 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:41.025 07:51:46 -- rpc/rpc.sh@67 -- # waitforlisten 65449 00:05:41.025 07:51:46 -- common/autotest_common.sh@819 -- # '[' -z 65449 ']' 00:05:41.025 07:51:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.025 07:51:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:41.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.025 07:51:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
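The rpc suite launched just above drives spdk_tgt over /var/tmp/spdk.sock; a hedged sketch of that flow using only commands visible in the trace (backgrounding and pid handling are simplified):

# Start the target with the bdev tracepoint group enabled, as in the trace.
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev &
spdk_pid=$!
waitforlisten "$spdk_pid"             # poll until the RPC socket answers

# rpc_cmd wraps scripts/rpc.py and talks to /var/tmp/spdk.sock by default.
rpc_cmd bdev_get_bdevs | jq length    # 0 before anything is created
rpc_cmd bdev_malloc_create 8 512      # 8 MiB malloc bdev, 512-byte blocks (Malloc0)
rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
rpc_cmd bdev_get_bdevs | jq length    # 2: Malloc0 plus the Passthru0 claim on top of it

# Teardown mirrors the killprocess helper seen at the end of the suite.
kill "$spdk_pid" && wait "$spdk_pid"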
00:05:41.025 07:51:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:41.025 07:51:46 -- common/autotest_common.sh@10 -- # set +x 00:05:41.025 [2024-07-13 07:51:46.780153] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:41.025 [2024-07-13 07:51:46.780265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65449 ] 00:05:41.284 [2024-07-13 07:51:46.919183] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.284 [2024-07-13 07:51:46.955454] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:41.284 [2024-07-13 07:51:46.955619] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:41.284 [2024-07-13 07:51:46.955636] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 65449' to capture a snapshot of events at runtime. 00:05:41.284 [2024-07-13 07:51:46.955644] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid65449 for offline analysis/debug. 00:05:41.284 [2024-07-13 07:51:46.955677] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.218 07:51:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:42.218 07:51:47 -- common/autotest_common.sh@852 -- # return 0 00:05:42.218 07:51:47 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:42.218 07:51:47 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:42.218 07:51:47 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:42.218 07:51:47 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:42.218 07:51:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.218 07:51:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.218 07:51:47 -- common/autotest_common.sh@10 -- # set +x 00:05:42.218 ************************************ 00:05:42.218 START TEST rpc_integrity 00:05:42.218 ************************************ 00:05:42.218 07:51:47 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:42.218 07:51:47 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.218 07:51:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.218 07:51:47 -- common/autotest_common.sh@10 -- # set +x 00:05:42.218 07:51:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.218 07:51:47 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.218 07:51:47 -- rpc/rpc.sh@13 -- # jq length 00:05:42.218 07:51:47 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.218 07:51:47 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.218 07:51:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.218 07:51:47 -- common/autotest_common.sh@10 -- # set +x 00:05:42.218 07:51:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.218 07:51:47 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:42.218 07:51:47 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.218 07:51:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.218 07:51:47 -- 
common/autotest_common.sh@10 -- # set +x 00:05:42.218 07:51:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.218 07:51:47 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.218 { 00:05:42.218 "name": "Malloc0", 00:05:42.218 "aliases": [ 00:05:42.218 "ffa043c2-50a9-4024-a2a9-6466419e2a3d" 00:05:42.218 ], 00:05:42.218 "product_name": "Malloc disk", 00:05:42.218 "block_size": 512, 00:05:42.218 "num_blocks": 16384, 00:05:42.218 "uuid": "ffa043c2-50a9-4024-a2a9-6466419e2a3d", 00:05:42.218 "assigned_rate_limits": { 00:05:42.218 "rw_ios_per_sec": 0, 00:05:42.218 "rw_mbytes_per_sec": 0, 00:05:42.218 "r_mbytes_per_sec": 0, 00:05:42.218 "w_mbytes_per_sec": 0 00:05:42.218 }, 00:05:42.218 "claimed": false, 00:05:42.218 "zoned": false, 00:05:42.218 "supported_io_types": { 00:05:42.218 "read": true, 00:05:42.218 "write": true, 00:05:42.218 "unmap": true, 00:05:42.218 "write_zeroes": true, 00:05:42.218 "flush": true, 00:05:42.218 "reset": true, 00:05:42.218 "compare": false, 00:05:42.218 "compare_and_write": false, 00:05:42.218 "abort": true, 00:05:42.218 "nvme_admin": false, 00:05:42.218 "nvme_io": false 00:05:42.218 }, 00:05:42.218 "memory_domains": [ 00:05:42.218 { 00:05:42.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.218 "dma_device_type": 2 00:05:42.218 } 00:05:42.218 ], 00:05:42.218 "driver_specific": {} 00:05:42.218 } 00:05:42.218 ]' 00:05:42.218 07:51:47 -- rpc/rpc.sh@17 -- # jq length 00:05:42.218 07:51:47 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.218 07:51:47 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:42.218 07:51:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.218 07:51:47 -- common/autotest_common.sh@10 -- # set +x 00:05:42.218 [2024-07-13 07:51:47.912205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:42.218 [2024-07-13 07:51:47.912263] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:42.218 [2024-07-13 07:51:47.912279] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x79ca10 00:05:42.218 [2024-07-13 07:51:47.912287] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:42.218 [2024-07-13 07:51:47.913874] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:42.218 [2024-07-13 07:51:47.913913] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:42.218 Passthru0 00:05:42.218 07:51:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.218 07:51:47 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:42.218 07:51:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.218 07:51:47 -- common/autotest_common.sh@10 -- # set +x 00:05:42.218 07:51:47 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.218 07:51:47 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:42.218 { 00:05:42.218 "name": "Malloc0", 00:05:42.218 "aliases": [ 00:05:42.218 "ffa043c2-50a9-4024-a2a9-6466419e2a3d" 00:05:42.218 ], 00:05:42.218 "product_name": "Malloc disk", 00:05:42.218 "block_size": 512, 00:05:42.218 "num_blocks": 16384, 00:05:42.218 "uuid": "ffa043c2-50a9-4024-a2a9-6466419e2a3d", 00:05:42.218 "assigned_rate_limits": { 00:05:42.218 "rw_ios_per_sec": 0, 00:05:42.218 "rw_mbytes_per_sec": 0, 00:05:42.218 "r_mbytes_per_sec": 0, 00:05:42.219 "w_mbytes_per_sec": 0 00:05:42.219 }, 00:05:42.219 "claimed": true, 00:05:42.219 "claim_type": "exclusive_write", 00:05:42.219 "zoned": false, 00:05:42.219 "supported_io_types": { 00:05:42.219 "read": true, 
00:05:42.219 "write": true, 00:05:42.219 "unmap": true, 00:05:42.219 "write_zeroes": true, 00:05:42.219 "flush": true, 00:05:42.219 "reset": true, 00:05:42.219 "compare": false, 00:05:42.219 "compare_and_write": false, 00:05:42.219 "abort": true, 00:05:42.219 "nvme_admin": false, 00:05:42.219 "nvme_io": false 00:05:42.219 }, 00:05:42.219 "memory_domains": [ 00:05:42.219 { 00:05:42.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.219 "dma_device_type": 2 00:05:42.219 } 00:05:42.219 ], 00:05:42.219 "driver_specific": {} 00:05:42.219 }, 00:05:42.219 { 00:05:42.219 "name": "Passthru0", 00:05:42.219 "aliases": [ 00:05:42.219 "b891fa65-b730-5c0a-ab03-9e7675ef4f05" 00:05:42.219 ], 00:05:42.219 "product_name": "passthru", 00:05:42.219 "block_size": 512, 00:05:42.219 "num_blocks": 16384, 00:05:42.219 "uuid": "b891fa65-b730-5c0a-ab03-9e7675ef4f05", 00:05:42.219 "assigned_rate_limits": { 00:05:42.219 "rw_ios_per_sec": 0, 00:05:42.219 "rw_mbytes_per_sec": 0, 00:05:42.219 "r_mbytes_per_sec": 0, 00:05:42.219 "w_mbytes_per_sec": 0 00:05:42.219 }, 00:05:42.219 "claimed": false, 00:05:42.219 "zoned": false, 00:05:42.219 "supported_io_types": { 00:05:42.219 "read": true, 00:05:42.219 "write": true, 00:05:42.219 "unmap": true, 00:05:42.219 "write_zeroes": true, 00:05:42.219 "flush": true, 00:05:42.219 "reset": true, 00:05:42.219 "compare": false, 00:05:42.219 "compare_and_write": false, 00:05:42.219 "abort": true, 00:05:42.219 "nvme_admin": false, 00:05:42.219 "nvme_io": false 00:05:42.219 }, 00:05:42.219 "memory_domains": [ 00:05:42.219 { 00:05:42.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.219 "dma_device_type": 2 00:05:42.219 } 00:05:42.219 ], 00:05:42.219 "driver_specific": { 00:05:42.219 "passthru": { 00:05:42.219 "name": "Passthru0", 00:05:42.219 "base_bdev_name": "Malloc0" 00:05:42.219 } 00:05:42.219 } 00:05:42.219 } 00:05:42.219 ]' 00:05:42.219 07:51:47 -- rpc/rpc.sh@21 -- # jq length 00:05:42.219 07:51:47 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:42.219 07:51:47 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:42.219 07:51:47 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.219 07:51:47 -- common/autotest_common.sh@10 -- # set +x 00:05:42.219 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.219 07:51:48 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:42.219 07:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.219 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.219 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.219 07:51:48 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:42.219 07:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.219 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.219 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.219 07:51:48 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:42.219 07:51:48 -- rpc/rpc.sh@26 -- # jq length 00:05:42.477 ************************************ 00:05:42.477 END TEST rpc_integrity 00:05:42.477 ************************************ 00:05:42.477 07:51:48 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:42.477 00:05:42.477 real 0m0.325s 00:05:42.477 user 0m0.223s 00:05:42.477 sys 0m0.034s 00:05:42.477 07:51:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.477 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.477 07:51:48 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:42.477 07:51:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 
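The rpc_plugins test beginning here exercises rpc.py's --plugin loading: the plugin module only needs to be importable, which the PYTHONPATH export earlier in the suite arranges. A short sketch assuming the rpc_plugin module under test/rpc_plugins:

# Make the sample plugin importable, as the PYTHONPATH export above does.
export PYTHONPATH=$PYTHONPATH:/home/vagrant/spdk_repo/spdk/test/rpc_plugins

# The plugin adds create_malloc/delete_malloc verbs on top of the stock RPCs.
malloc=$(rpc_cmd --plugin rpc_plugin create_malloc)   # returns the bdev name, Malloc1 in this run
rpc_cmd bdev_get_bdevs | jq length                    # the plugin-created bdev is visible (1 here)
rpc_cmd --plugin rpc_plugin delete_malloc "$malloc"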
00:05:42.477 07:51:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.477 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.477 ************************************ 00:05:42.477 START TEST rpc_plugins 00:05:42.477 ************************************ 00:05:42.477 07:51:48 -- common/autotest_common.sh@1104 -- # rpc_plugins 00:05:42.477 07:51:48 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:42.477 07:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.477 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.477 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.477 07:51:48 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:42.477 07:51:48 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:42.477 07:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.477 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.477 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.477 07:51:48 -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:42.477 { 00:05:42.477 "name": "Malloc1", 00:05:42.477 "aliases": [ 00:05:42.477 "e44298d5-eee8-4d1f-a9fb-b8374874d372" 00:05:42.477 ], 00:05:42.477 "product_name": "Malloc disk", 00:05:42.477 "block_size": 4096, 00:05:42.477 "num_blocks": 256, 00:05:42.477 "uuid": "e44298d5-eee8-4d1f-a9fb-b8374874d372", 00:05:42.477 "assigned_rate_limits": { 00:05:42.477 "rw_ios_per_sec": 0, 00:05:42.477 "rw_mbytes_per_sec": 0, 00:05:42.477 "r_mbytes_per_sec": 0, 00:05:42.477 "w_mbytes_per_sec": 0 00:05:42.477 }, 00:05:42.477 "claimed": false, 00:05:42.477 "zoned": false, 00:05:42.477 "supported_io_types": { 00:05:42.477 "read": true, 00:05:42.477 "write": true, 00:05:42.477 "unmap": true, 00:05:42.477 "write_zeroes": true, 00:05:42.477 "flush": true, 00:05:42.477 "reset": true, 00:05:42.477 "compare": false, 00:05:42.477 "compare_and_write": false, 00:05:42.477 "abort": true, 00:05:42.478 "nvme_admin": false, 00:05:42.478 "nvme_io": false 00:05:42.478 }, 00:05:42.478 "memory_domains": [ 00:05:42.478 { 00:05:42.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.478 "dma_device_type": 2 00:05:42.478 } 00:05:42.478 ], 00:05:42.478 "driver_specific": {} 00:05:42.478 } 00:05:42.478 ]' 00:05:42.478 07:51:48 -- rpc/rpc.sh@32 -- # jq length 00:05:42.478 07:51:48 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:42.478 07:51:48 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:42.478 07:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.478 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.478 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.478 07:51:48 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:42.478 07:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.478 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.478 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.478 07:51:48 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:42.478 07:51:48 -- rpc/rpc.sh@36 -- # jq length 00:05:42.737 ************************************ 00:05:42.737 END TEST rpc_plugins 00:05:42.737 ************************************ 00:05:42.737 07:51:48 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:42.737 00:05:42.737 real 0m0.157s 00:05:42.737 user 0m0.099s 00:05:42.737 sys 0m0.021s 00:05:42.737 07:51:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.737 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.737 07:51:48 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test 
rpc_trace_cmd_test 00:05:42.737 07:51:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.737 07:51:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.737 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.737 ************************************ 00:05:42.737 START TEST rpc_trace_cmd_test 00:05:42.737 ************************************ 00:05:42.737 07:51:48 -- common/autotest_common.sh@1104 -- # rpc_trace_cmd_test 00:05:42.737 07:51:48 -- rpc/rpc.sh@40 -- # local info 00:05:42.737 07:51:48 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:42.737 07:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.737 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.737 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.737 07:51:48 -- rpc/rpc.sh@42 -- # info='{ 00:05:42.737 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid65449", 00:05:42.737 "tpoint_group_mask": "0x8", 00:05:42.737 "iscsi_conn": { 00:05:42.737 "mask": "0x2", 00:05:42.737 "tpoint_mask": "0x0" 00:05:42.737 }, 00:05:42.737 "scsi": { 00:05:42.737 "mask": "0x4", 00:05:42.737 "tpoint_mask": "0x0" 00:05:42.737 }, 00:05:42.737 "bdev": { 00:05:42.737 "mask": "0x8", 00:05:42.737 "tpoint_mask": "0xffffffffffffffff" 00:05:42.737 }, 00:05:42.737 "nvmf_rdma": { 00:05:42.737 "mask": "0x10", 00:05:42.737 "tpoint_mask": "0x0" 00:05:42.737 }, 00:05:42.737 "nvmf_tcp": { 00:05:42.737 "mask": "0x20", 00:05:42.737 "tpoint_mask": "0x0" 00:05:42.737 }, 00:05:42.737 "ftl": { 00:05:42.737 "mask": "0x40", 00:05:42.737 "tpoint_mask": "0x0" 00:05:42.737 }, 00:05:42.737 "blobfs": { 00:05:42.737 "mask": "0x80", 00:05:42.737 "tpoint_mask": "0x0" 00:05:42.737 }, 00:05:42.737 "dsa": { 00:05:42.737 "mask": "0x200", 00:05:42.737 "tpoint_mask": "0x0" 00:05:42.737 }, 00:05:42.737 "thread": { 00:05:42.737 "mask": "0x400", 00:05:42.737 "tpoint_mask": "0x0" 00:05:42.737 }, 00:05:42.737 "nvme_pcie": { 00:05:42.737 "mask": "0x800", 00:05:42.737 "tpoint_mask": "0x0" 00:05:42.737 }, 00:05:42.737 "iaa": { 00:05:42.737 "mask": "0x1000", 00:05:42.737 "tpoint_mask": "0x0" 00:05:42.737 }, 00:05:42.737 "nvme_tcp": { 00:05:42.737 "mask": "0x2000", 00:05:42.737 "tpoint_mask": "0x0" 00:05:42.737 }, 00:05:42.737 "bdev_nvme": { 00:05:42.737 "mask": "0x4000", 00:05:42.737 "tpoint_mask": "0x0" 00:05:42.737 } 00:05:42.737 }' 00:05:42.737 07:51:48 -- rpc/rpc.sh@43 -- # jq length 00:05:42.737 07:51:48 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:05:42.737 07:51:48 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:42.737 07:51:48 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:42.737 07:51:48 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:42.737 07:51:48 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:42.737 07:51:48 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:42.996 07:51:48 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:42.996 07:51:48 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:42.996 ************************************ 00:05:42.996 END TEST rpc_trace_cmd_test 00:05:42.996 ************************************ 00:05:42.996 07:51:48 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:42.996 00:05:42.996 real 0m0.267s 00:05:42.996 user 0m0.232s 00:05:42.996 sys 0m0.027s 00:05:42.996 07:51:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.996 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.996 07:51:48 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:42.996 07:51:48 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:42.996 07:51:48 -- rpc/rpc.sh@81 -- # run_test 
rpc_daemon_integrity rpc_integrity 00:05:42.996 07:51:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:42.996 07:51:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:42.996 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.996 ************************************ 00:05:42.996 START TEST rpc_daemon_integrity 00:05:42.996 ************************************ 00:05:42.996 07:51:48 -- common/autotest_common.sh@1104 -- # rpc_integrity 00:05:42.996 07:51:48 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:42.996 07:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.996 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.996 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.996 07:51:48 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:42.996 07:51:48 -- rpc/rpc.sh@13 -- # jq length 00:05:42.996 07:51:48 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:42.996 07:51:48 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:42.997 07:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.997 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.997 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.997 07:51:48 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:42.997 07:51:48 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:42.997 07:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.997 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:42.997 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:42.997 07:51:48 -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:42.997 { 00:05:42.997 "name": "Malloc2", 00:05:42.997 "aliases": [ 00:05:42.997 "39105e44-9c0f-4782-ae78-094f4e89482c" 00:05:42.997 ], 00:05:42.997 "product_name": "Malloc disk", 00:05:42.997 "block_size": 512, 00:05:42.997 "num_blocks": 16384, 00:05:42.997 "uuid": "39105e44-9c0f-4782-ae78-094f4e89482c", 00:05:42.997 "assigned_rate_limits": { 00:05:42.997 "rw_ios_per_sec": 0, 00:05:42.997 "rw_mbytes_per_sec": 0, 00:05:42.997 "r_mbytes_per_sec": 0, 00:05:42.997 "w_mbytes_per_sec": 0 00:05:42.997 }, 00:05:42.997 "claimed": false, 00:05:42.997 "zoned": false, 00:05:42.997 "supported_io_types": { 00:05:42.997 "read": true, 00:05:42.997 "write": true, 00:05:42.997 "unmap": true, 00:05:42.997 "write_zeroes": true, 00:05:42.997 "flush": true, 00:05:42.997 "reset": true, 00:05:42.997 "compare": false, 00:05:42.997 "compare_and_write": false, 00:05:42.997 "abort": true, 00:05:42.997 "nvme_admin": false, 00:05:42.997 "nvme_io": false 00:05:42.997 }, 00:05:42.997 "memory_domains": [ 00:05:42.997 { 00:05:42.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:42.997 "dma_device_type": 2 00:05:42.997 } 00:05:42.997 ], 00:05:42.997 "driver_specific": {} 00:05:42.997 } 00:05:42.997 ]' 00:05:42.997 07:51:48 -- rpc/rpc.sh@17 -- # jq length 00:05:42.997 07:51:48 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:42.997 07:51:48 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:42.997 07:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:42.997 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.257 [2024-07-13 07:51:48.812526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:43.257 [2024-07-13 07:51:48.812582] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:43.257 [2024-07-13 07:51:48.812602] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5d4830 00:05:43.257 [2024-07-13 
07:51:48.812610] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:43.257 [2024-07-13 07:51:48.813947] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:43.257 [2024-07-13 07:51:48.813998] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:43.257 Passthru0 00:05:43.257 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.257 07:51:48 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:43.257 07:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.257 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.257 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.257 07:51:48 -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:43.257 { 00:05:43.257 "name": "Malloc2", 00:05:43.257 "aliases": [ 00:05:43.257 "39105e44-9c0f-4782-ae78-094f4e89482c" 00:05:43.257 ], 00:05:43.257 "product_name": "Malloc disk", 00:05:43.257 "block_size": 512, 00:05:43.257 "num_blocks": 16384, 00:05:43.257 "uuid": "39105e44-9c0f-4782-ae78-094f4e89482c", 00:05:43.257 "assigned_rate_limits": { 00:05:43.257 "rw_ios_per_sec": 0, 00:05:43.257 "rw_mbytes_per_sec": 0, 00:05:43.257 "r_mbytes_per_sec": 0, 00:05:43.257 "w_mbytes_per_sec": 0 00:05:43.257 }, 00:05:43.257 "claimed": true, 00:05:43.257 "claim_type": "exclusive_write", 00:05:43.257 "zoned": false, 00:05:43.257 "supported_io_types": { 00:05:43.257 "read": true, 00:05:43.257 "write": true, 00:05:43.257 "unmap": true, 00:05:43.257 "write_zeroes": true, 00:05:43.257 "flush": true, 00:05:43.257 "reset": true, 00:05:43.257 "compare": false, 00:05:43.257 "compare_and_write": false, 00:05:43.257 "abort": true, 00:05:43.257 "nvme_admin": false, 00:05:43.257 "nvme_io": false 00:05:43.257 }, 00:05:43.257 "memory_domains": [ 00:05:43.257 { 00:05:43.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.257 "dma_device_type": 2 00:05:43.257 } 00:05:43.257 ], 00:05:43.257 "driver_specific": {} 00:05:43.257 }, 00:05:43.257 { 00:05:43.257 "name": "Passthru0", 00:05:43.257 "aliases": [ 00:05:43.257 "b375ad9e-41e4-528f-87f0-3b10d828f213" 00:05:43.257 ], 00:05:43.257 "product_name": "passthru", 00:05:43.257 "block_size": 512, 00:05:43.257 "num_blocks": 16384, 00:05:43.257 "uuid": "b375ad9e-41e4-528f-87f0-3b10d828f213", 00:05:43.257 "assigned_rate_limits": { 00:05:43.257 "rw_ios_per_sec": 0, 00:05:43.257 "rw_mbytes_per_sec": 0, 00:05:43.257 "r_mbytes_per_sec": 0, 00:05:43.257 "w_mbytes_per_sec": 0 00:05:43.257 }, 00:05:43.257 "claimed": false, 00:05:43.257 "zoned": false, 00:05:43.257 "supported_io_types": { 00:05:43.257 "read": true, 00:05:43.257 "write": true, 00:05:43.257 "unmap": true, 00:05:43.257 "write_zeroes": true, 00:05:43.257 "flush": true, 00:05:43.257 "reset": true, 00:05:43.257 "compare": false, 00:05:43.257 "compare_and_write": false, 00:05:43.257 "abort": true, 00:05:43.257 "nvme_admin": false, 00:05:43.257 "nvme_io": false 00:05:43.257 }, 00:05:43.257 "memory_domains": [ 00:05:43.257 { 00:05:43.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:43.257 "dma_device_type": 2 00:05:43.257 } 00:05:43.257 ], 00:05:43.257 "driver_specific": { 00:05:43.257 "passthru": { 00:05:43.257 "name": "Passthru0", 00:05:43.257 "base_bdev_name": "Malloc2" 00:05:43.257 } 00:05:43.257 } 00:05:43.257 } 00:05:43.257 ]' 00:05:43.257 07:51:48 -- rpc/rpc.sh@21 -- # jq length 00:05:43.257 07:51:48 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:43.257 07:51:48 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:43.257 07:51:48 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.257 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.257 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.257 07:51:48 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:43.257 07:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.257 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.257 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.257 07:51:48 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:43.257 07:51:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:43.257 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.257 07:51:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:43.257 07:51:48 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:43.257 07:51:48 -- rpc/rpc.sh@26 -- # jq length 00:05:43.257 ************************************ 00:05:43.257 END TEST rpc_daemon_integrity 00:05:43.257 ************************************ 00:05:43.257 07:51:48 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:43.257 00:05:43.257 real 0m0.311s 00:05:43.257 user 0m0.206s 00:05:43.257 sys 0m0.043s 00:05:43.257 07:51:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.257 07:51:48 -- common/autotest_common.sh@10 -- # set +x 00:05:43.257 07:51:49 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:43.257 07:51:49 -- rpc/rpc.sh@84 -- # killprocess 65449 00:05:43.257 07:51:49 -- common/autotest_common.sh@926 -- # '[' -z 65449 ']' 00:05:43.257 07:51:49 -- common/autotest_common.sh@930 -- # kill -0 65449 00:05:43.257 07:51:49 -- common/autotest_common.sh@931 -- # uname 00:05:43.257 07:51:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:43.257 07:51:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65449 00:05:43.257 killing process with pid 65449 00:05:43.257 07:51:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:43.257 07:51:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:43.257 07:51:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65449' 00:05:43.257 07:51:49 -- common/autotest_common.sh@945 -- # kill 65449 00:05:43.257 07:51:49 -- common/autotest_common.sh@950 -- # wait 65449 00:05:43.516 00:05:43.516 real 0m2.622s 00:05:43.516 user 0m3.579s 00:05:43.516 sys 0m0.535s 00:05:43.516 07:51:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.516 ************************************ 00:05:43.516 END TEST rpc 00:05:43.516 ************************************ 00:05:43.516 07:51:49 -- common/autotest_common.sh@10 -- # set +x 00:05:43.517 07:51:49 -- spdk/autotest.sh@177 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:43.517 07:51:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.517 07:51:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.517 07:51:49 -- common/autotest_common.sh@10 -- # set +x 00:05:43.517 ************************************ 00:05:43.517 START TEST rpc_client 00:05:43.517 ************************************ 00:05:43.517 07:51:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:43.777 * Looking for test storage... 
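The rpc_daemon_integrity pass above reduces to a short RPC sequence against the running target. A minimal standalone sketch, assuming a spdk_tgt already listening on its default /var/tmp/spdk.sock and run from the SPDK repository root (socket path, sizes and jq usage mirror the test):

  # create an 8 MiB malloc bdev with a 512-byte block size (16384 blocks, as reported above)
  malloc=$(scripts/rpc.py bdev_malloc_create 8 512)
  # wrap it in a passthru bdev; the base bdev then shows up with "claimed": true
  scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length    # expect 2 (base malloc + Passthru0)
  # tear down in reverse order and confirm the bdev list is empty again
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete "$malloc"
  scripts/rpc.py bdev_get_bdevs | jq length    # expect 0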
00:05:43.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:43.777 07:51:49 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:43.777 OK 00:05:43.777 07:51:49 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:43.777 00:05:43.777 real 0m0.103s 00:05:43.777 user 0m0.051s 00:05:43.777 sys 0m0.058s 00:05:43.777 ************************************ 00:05:43.777 END TEST rpc_client 00:05:43.777 ************************************ 00:05:43.777 07:51:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:43.777 07:51:49 -- common/autotest_common.sh@10 -- # set +x 00:05:43.777 07:51:49 -- spdk/autotest.sh@178 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:43.777 07:51:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:43.777 07:51:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:43.777 07:51:49 -- common/autotest_common.sh@10 -- # set +x 00:05:43.777 ************************************ 00:05:43.777 START TEST json_config 00:05:43.777 ************************************ 00:05:43.777 07:51:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:43.777 07:51:49 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:43.777 07:51:49 -- nvmf/common.sh@7 -- # uname -s 00:05:43.777 07:51:49 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:43.777 07:51:49 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:43.777 07:51:49 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:43.777 07:51:49 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:43.777 07:51:49 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:43.777 07:51:49 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:43.777 07:51:49 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:43.777 07:51:49 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:43.777 07:51:49 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:43.777 07:51:49 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:43.777 07:51:49 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:05:43.777 07:51:49 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:05:43.777 07:51:49 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:43.777 07:51:49 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:43.777 07:51:49 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:43.777 07:51:49 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:43.777 07:51:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:43.777 07:51:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:43.777 07:51:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:43.777 07:51:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.777 07:51:49 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.777 07:51:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.777 07:51:49 -- paths/export.sh@5 -- # export PATH 00:05:43.777 07:51:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:43.777 07:51:49 -- nvmf/common.sh@46 -- # : 0 00:05:43.777 07:51:49 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:43.777 07:51:49 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:43.777 07:51:49 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:43.777 07:51:49 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:43.777 07:51:49 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:43.777 07:51:49 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:43.777 07:51:49 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:43.777 07:51:49 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:43.777 07:51:49 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:05:43.777 07:51:49 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:05:43.777 07:51:49 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:05:43.777 07:51:49 -- json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:43.777 07:51:49 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:05:43.777 07:51:49 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:05:43.777 07:51:49 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:43.777 07:51:49 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:05:43.777 07:51:49 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:43.777 07:51:49 -- json_config/json_config.sh@32 -- # declare -A app_params 00:05:43.777 07:51:49 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:43.777 07:51:49 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:05:43.777 07:51:49 -- json_config/json_config.sh@43 -- # last_event_id=0 00:05:43.777 07:51:49 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:43.777 07:51:49 -- 
json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:05:43.777 INFO: JSON configuration test init 00:05:43.777 07:51:49 -- json_config/json_config.sh@420 -- # json_config_test_init 00:05:43.777 07:51:49 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:05:43.777 07:51:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:43.777 07:51:49 -- common/autotest_common.sh@10 -- # set +x 00:05:43.777 07:51:49 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:05:43.777 07:51:49 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:43.777 07:51:49 -- common/autotest_common.sh@10 -- # set +x 00:05:43.777 07:51:49 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:05:43.777 07:51:49 -- json_config/json_config.sh@98 -- # local app=target 00:05:43.777 07:51:49 -- json_config/json_config.sh@99 -- # shift 00:05:43.777 07:51:49 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:43.777 07:51:49 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:43.777 Waiting for target to run... 00:05:43.777 07:51:49 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:43.777 07:51:49 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:43.777 07:51:49 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:43.777 07:51:49 -- json_config/json_config.sh@111 -- # app_pid[$app]=65686 00:05:43.777 07:51:49 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:43.777 07:51:49 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:43.777 07:51:49 -- json_config/json_config.sh@114 -- # waitforlisten 65686 /var/tmp/spdk_tgt.sock 00:05:43.777 07:51:49 -- common/autotest_common.sh@819 -- # '[' -z 65686 ']' 00:05:43.777 07:51:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:43.777 07:51:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:43.777 07:51:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:43.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:43.777 07:51:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:43.777 07:51:49 -- common/autotest_common.sh@10 -- # set +x 00:05:43.777 [2024-07-13 07:51:49.588703] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:05:43.778 [2024-07-13 07:51:49.589039] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65686 ] 00:05:44.343 [2024-07-13 07:51:49.907402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.343 [2024-07-13 07:51:49.936098] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:44.343 [2024-07-13 07:51:49.936264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.911 00:05:44.911 07:51:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:44.911 07:51:50 -- common/autotest_common.sh@852 -- # return 0 00:05:44.911 07:51:50 -- json_config/json_config.sh@115 -- # echo '' 00:05:44.911 07:51:50 -- json_config/json_config.sh@322 -- # create_accel_config 00:05:44.911 07:51:50 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:05:44.911 07:51:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:44.911 07:51:50 -- common/autotest_common.sh@10 -- # set +x 00:05:44.911 07:51:50 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:05:44.911 07:51:50 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:05:44.911 07:51:50 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:44.911 07:51:50 -- common/autotest_common.sh@10 -- # set +x 00:05:44.911 07:51:50 -- json_config/json_config.sh@326 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:44.911 07:51:50 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:05:44.911 07:51:50 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:45.479 07:51:51 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:05:45.479 07:51:51 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:05:45.479 07:51:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:45.479 07:51:51 -- common/autotest_common.sh@10 -- # set +x 00:05:45.479 07:51:51 -- json_config/json_config.sh@48 -- # local ret=0 00:05:45.479 07:51:51 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:45.479 07:51:51 -- json_config/json_config.sh@49 -- # local enabled_types 00:05:45.479 07:51:51 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:45.479 07:51:51 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:45.479 07:51:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:45.479 07:51:51 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:45.479 07:51:51 -- json_config/json_config.sh@51 -- # local get_types 00:05:45.479 07:51:51 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:45.479 07:51:51 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:05:45.479 07:51:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:45.479 07:51:51 -- common/autotest_common.sh@10 -- # set +x 00:05:45.738 07:51:51 -- json_config/json_config.sh@58 -- # return 0 00:05:45.738 07:51:51 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:05:45.738 07:51:51 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 
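The json_config init sequence above starts the target paused and only then pushes configuration over the RPC socket. A rough equivalent of those steps, assuming the same flags and socket as in the log and the SPDK repo as the working directory:

  # launch the target on a private RPC socket; --wait-for-rpc defers subsystem init
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
  # poll until the RPC socket answers (the test uses its waitforlisten helper for this)
  until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
  # generate an NVMe bdev config and load it into the paused target
  scripts/gen_nvme.sh --json-with-subsystems | scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
  # confirm which notification types the target reports (bdev_register / bdev_unregister above)
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types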
00:05:45.738 07:51:51 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:05:45.738 07:51:51 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:05:45.738 07:51:51 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:05:45.738 07:51:51 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:05:45.739 07:51:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:45.739 07:51:51 -- common/autotest_common.sh@10 -- # set +x 00:05:45.739 07:51:51 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:45.739 07:51:51 -- json_config/json_config.sh@286 -- # [[ tcp == \r\d\m\a ]] 00:05:45.739 07:51:51 -- json_config/json_config.sh@290 -- # [[ -z 127.0.0.1 ]] 00:05:45.739 07:51:51 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:45.739 07:51:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:45.998 MallocForNvmf0 00:05:45.998 07:51:51 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:45.998 07:51:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:45.998 MallocForNvmf1 00:05:46.257 07:51:51 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:46.257 07:51:51 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:46.257 [2024-07-13 07:51:52.043878] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:46.257 07:51:52 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:46.257 07:51:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:46.516 07:51:52 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:46.516 07:51:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:46.775 07:51:52 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:46.775 07:51:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:47.033 07:51:52 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.033 07:51:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:47.033 [2024-07-13 07:51:52.808273] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:47.033 07:51:52 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:05:47.033 07:51:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:47.033 07:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.291 07:51:52 -- 
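The create_nvmf_subsystem_config step above is just a handful of RPCs; the commands below are the ones from the log, runnable against the same socket, assuming the target from the previous step is still up:

  rpc() { scripts/rpc.py -s /var/tmp/spdk_tgt.sock "$@"; }
  # two malloc bdevs to serve as namespaces (size in MiB / block size, as logged)
  rpc bdev_malloc_create 8 512  --name MallocForNvmf0
  rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  # TCP transport, flags exactly as used by the test
  rpc nvmf_create_transport -t tcp -u 8192 -c 0
  # one subsystem, two namespaces, one listener on 127.0.0.1:4420
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420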
json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:05:47.291 07:51:52 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:47.292 07:51:52 -- common/autotest_common.sh@10 -- # set +x 00:05:47.292 07:51:52 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:05:47.292 07:51:52 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.292 07:51:52 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:47.292 MallocBdevForConfigChangeCheck 00:05:47.292 07:51:53 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:05:47.292 07:51:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:47.292 07:51:53 -- common/autotest_common.sh@10 -- # set +x 00:05:47.550 07:51:53 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:05:47.550 07:51:53 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:47.808 INFO: shutting down applications... 00:05:47.808 07:51:53 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:05:47.808 07:51:53 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:05:47.808 07:51:53 -- json_config/json_config.sh@431 -- # json_config_clear target 00:05:47.808 07:51:53 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:05:47.808 07:51:53 -- json_config/json_config.sh@386 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:48.067 Calling clear_iscsi_subsystem 00:05:48.067 Calling clear_nvmf_subsystem 00:05:48.067 Calling clear_nbd_subsystem 00:05:48.067 Calling clear_ublk_subsystem 00:05:48.067 Calling clear_vhost_blk_subsystem 00:05:48.067 Calling clear_vhost_scsi_subsystem 00:05:48.067 Calling clear_scheduler_subsystem 00:05:48.067 Calling clear_bdev_subsystem 00:05:48.067 Calling clear_accel_subsystem 00:05:48.067 Calling clear_vmd_subsystem 00:05:48.067 Calling clear_sock_subsystem 00:05:48.067 Calling clear_iobuf_subsystem 00:05:48.067 07:51:53 -- json_config/json_config.sh@390 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:48.067 07:51:53 -- json_config/json_config.sh@396 -- # count=100 00:05:48.067 07:51:53 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:05:48.067 07:51:53 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.067 07:51:53 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:48.067 07:51:53 -- json_config/json_config.sh@398 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:48.326 07:51:54 -- json_config/json_config.sh@398 -- # break 00:05:48.326 07:51:54 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:05:48.326 07:51:54 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:05:48.326 07:51:54 -- json_config/json_config.sh@120 -- # local app=target 00:05:48.326 07:51:54 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:05:48.326 07:51:54 -- json_config/json_config.sh@124 -- # [[ -n 65686 ]] 00:05:48.326 07:51:54 -- json_config/json_config.sh@127 -- # kill -SIGINT 65686 00:05:48.326 07:51:54 -- json_config/json_config.sh@129 -- # (( i = 0 )) 
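Before shutting the target down, the test snapshots the live configuration and then proves that clear_config.py really empties it. A condensed sketch of those two steps, assuming the same socket; spdk_tgt_config.json is the save path the test itself uses:

  # snapshot the running configuration for the relaunch that follows
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > spdk_tgt_config.json
  # delete every subsystem's runtime objects, then assert nothing substantive is left
  test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method delete_global_parameters \
      | test/json_config/config_filter.py -method check_empty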
00:05:48.326 07:51:54 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:48.326 07:51:54 -- json_config/json_config.sh@130 -- # kill -0 65686 00:05:48.326 07:51:54 -- json_config/json_config.sh@134 -- # sleep 0.5 00:05:48.894 07:51:54 -- json_config/json_config.sh@129 -- # (( i++ )) 00:05:48.894 07:51:54 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:05:48.894 07:51:54 -- json_config/json_config.sh@130 -- # kill -0 65686 00:05:48.894 07:51:54 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:05:48.894 07:51:54 -- json_config/json_config.sh@132 -- # break 00:05:48.894 07:51:54 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:05:48.894 SPDK target shutdown done 00:05:48.894 07:51:54 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:05:48.894 INFO: relaunching applications... 00:05:48.894 07:51:54 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:05:48.894 07:51:54 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.894 07:51:54 -- json_config/json_config.sh@98 -- # local app=target 00:05:48.894 07:51:54 -- json_config/json_config.sh@99 -- # shift 00:05:48.894 07:51:54 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:05:48.894 07:51:54 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:05:48.894 07:51:54 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:05:48.894 07:51:54 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:48.894 07:51:54 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:05:48.894 07:51:54 -- json_config/json_config.sh@111 -- # app_pid[$app]=65871 00:05:48.894 Waiting for target to run... 00:05:48.894 07:51:54 -- json_config/json_config.sh@110 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.894 07:51:54 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:05:48.894 07:51:54 -- json_config/json_config.sh@114 -- # waitforlisten 65871 /var/tmp/spdk_tgt.sock 00:05:48.894 07:51:54 -- common/autotest_common.sh@819 -- # '[' -z 65871 ']' 00:05:48.894 07:51:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.894 07:51:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:48.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.894 07:51:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.894 07:51:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:48.894 07:51:54 -- common/autotest_common.sh@10 -- # set +x 00:05:48.894 [2024-07-13 07:51:54.564988] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:05:48.894 [2024-07-13 07:51:54.565108] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65871 ] 00:05:49.153 [2024-07-13 07:51:54.850864] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.153 [2024-07-13 07:51:54.870661] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:49.153 [2024-07-13 07:51:54.870832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.412 [2024-07-13 07:51:55.161359] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:49.412 [2024-07-13 07:51:55.193428] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:49.979 07:51:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:49.979 07:51:55 -- common/autotest_common.sh@852 -- # return 0 00:05:49.979 00:05:49.979 07:51:55 -- json_config/json_config.sh@115 -- # echo '' 00:05:49.979 07:51:55 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:05:49.979 INFO: Checking if target configuration is the same... 00:05:49.979 07:51:55 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:49.979 07:51:55 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:05:49.979 07:51:55 -- json_config/json_config.sh@441 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:49.979 07:51:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:49.979 + '[' 2 -ne 2 ']' 00:05:49.979 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:49.979 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:49.979 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:49.979 +++ basename /dev/fd/62 00:05:49.979 ++ mktemp /tmp/62.XXX 00:05:49.979 + tmp_file_1=/tmp/62.cuC 00:05:49.979 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:49.979 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:49.979 + tmp_file_2=/tmp/spdk_tgt_config.json.mB8 00:05:49.979 + ret=0 00:05:49.979 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:50.238 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:50.238 + diff -u /tmp/62.cuC /tmp/spdk_tgt_config.json.mB8 00:05:50.238 INFO: JSON config files are the same 00:05:50.238 + echo 'INFO: JSON config files are the same' 00:05:50.238 + rm /tmp/62.cuC /tmp/spdk_tgt_config.json.mB8 00:05:50.238 + exit 0 00:05:50.238 07:51:55 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:05:50.238 INFO: changing configuration and checking if this can be detected... 00:05:50.238 07:51:55 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 
00:05:50.238 07:51:55 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:50.238 07:51:55 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:50.497 07:51:56 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:05:50.497 07:51:56 -- json_config/json_config.sh@450 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:50.497 07:51:56 -- json_config/json_config.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:50.497 + '[' 2 -ne 2 ']' 00:05:50.497 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:50.497 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:50.497 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:50.497 +++ basename /dev/fd/62 00:05:50.497 ++ mktemp /tmp/62.XXX 00:05:50.497 + tmp_file_1=/tmp/62.3r4 00:05:50.497 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:50.497 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:50.497 + tmp_file_2=/tmp/spdk_tgt_config.json.a9M 00:05:50.497 + ret=0 00:05:50.497 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:50.757 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:51.016 + diff -u /tmp/62.3r4 /tmp/spdk_tgt_config.json.a9M 00:05:51.016 + ret=1 00:05:51.016 + echo '=== Start of file: /tmp/62.3r4 ===' 00:05:51.016 + cat /tmp/62.3r4 00:05:51.016 + echo '=== End of file: /tmp/62.3r4 ===' 00:05:51.016 + echo '' 00:05:51.016 + echo '=== Start of file: /tmp/spdk_tgt_config.json.a9M ===' 00:05:51.016 + cat /tmp/spdk_tgt_config.json.a9M 00:05:51.016 + echo '=== End of file: /tmp/spdk_tgt_config.json.a9M ===' 00:05:51.016 + echo '' 00:05:51.016 + rm /tmp/62.3r4 /tmp/spdk_tgt_config.json.a9M 00:05:51.016 + exit 1 00:05:51.016 INFO: configuration change detected. 00:05:51.016 07:51:56 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 
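The "JSON config files are the same" and "configuration change detected" results above both come from comparing sorted save_config dumps. A rough equivalent of what json_diff.sh does, with the /tmp paths used only for illustration:

  sort_cfg="test/json_config/config_filter.py -method sort"
  # canonicalize the live config and the previously saved file, then diff them
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | $sort_cfg > /tmp/live.json
  $sort_cfg < spdk_tgt_config.json > /tmp/saved.json
  diff -u /tmp/saved.json /tmp/live.json && echo 'INFO: JSON config files are the same'
  # removing the sentinel bdev must make the next comparison fail (ret=1 above)
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  # re-running the same save_config | sort | diff sequence now exits non-zero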
00:05:51.016 07:51:56 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:05:51.016 07:51:56 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:05:51.016 07:51:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:51.016 07:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.016 07:51:56 -- json_config/json_config.sh@360 -- # local ret=0 00:05:51.016 07:51:56 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:05:51.016 07:51:56 -- json_config/json_config.sh@370 -- # [[ -n 65871 ]] 00:05:51.016 07:51:56 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:05:51.016 07:51:56 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:05:51.016 07:51:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:51.016 07:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.016 07:51:56 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:05:51.016 07:51:56 -- json_config/json_config.sh@246 -- # uname -s 00:05:51.016 07:51:56 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:05:51.016 07:51:56 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:05:51.016 07:51:56 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:05:51.016 07:51:56 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:05:51.016 07:51:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:51.016 07:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.016 07:51:56 -- json_config/json_config.sh@376 -- # killprocess 65871 00:05:51.016 07:51:56 -- common/autotest_common.sh@926 -- # '[' -z 65871 ']' 00:05:51.016 07:51:56 -- common/autotest_common.sh@930 -- # kill -0 65871 00:05:51.016 07:51:56 -- common/autotest_common.sh@931 -- # uname 00:05:51.016 07:51:56 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:51.016 07:51:56 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 65871 00:05:51.016 07:51:56 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:51.016 killing process with pid 65871 00:05:51.016 07:51:56 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:51.016 07:51:56 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 65871' 00:05:51.016 07:51:56 -- common/autotest_common.sh@945 -- # kill 65871 00:05:51.016 07:51:56 -- common/autotest_common.sh@950 -- # wait 65871 00:05:51.016 07:51:56 -- json_config/json_config.sh@379 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:51.016 07:51:56 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:05:51.016 07:51:56 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:51.016 07:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.276 INFO: Success 00:05:51.276 07:51:56 -- json_config/json_config.sh@381 -- # return 0 00:05:51.276 07:51:56 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:05:51.276 00:05:51.276 real 0m7.418s 00:05:51.276 user 0m10.531s 00:05:51.276 sys 0m1.350s 00:05:51.276 ************************************ 00:05:51.276 END TEST json_config 00:05:51.276 ************************************ 00:05:51.276 07:51:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:51.276 07:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.276 07:51:56 -- spdk/autotest.sh@179 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:51.276 
07:51:56 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:51.276 07:51:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:51.276 07:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.276 ************************************ 00:05:51.276 START TEST json_config_extra_key 00:05:51.276 ************************************ 00:05:51.276 07:51:56 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:51.276 07:51:56 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:51.276 07:51:56 -- nvmf/common.sh@7 -- # uname -s 00:05:51.276 07:51:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:51.276 07:51:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:51.276 07:51:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:51.276 07:51:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:51.276 07:51:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:51.276 07:51:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:51.276 07:51:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:51.276 07:51:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:51.277 07:51:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:51.277 07:51:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:51.277 07:51:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:05:51.277 07:51:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:05:51.277 07:51:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:51.277 07:51:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:51.277 07:51:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:51.277 07:51:56 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:51.277 07:51:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:51.277 07:51:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:51.277 07:51:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:51.277 07:51:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.277 07:51:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.277 07:51:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:05:51.277 07:51:56 -- paths/export.sh@5 -- # export PATH 00:05:51.277 07:51:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:51.277 07:51:56 -- nvmf/common.sh@46 -- # : 0 00:05:51.277 07:51:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:05:51.277 07:51:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:05:51.277 07:51:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:05:51.277 07:51:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:51.277 07:51:56 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:51.277 07:51:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:05:51.277 07:51:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:05:51.277 07:51:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@16 -- # declare -A app_pid 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:05:51.277 INFO: launching applications... 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@25 -- # shift 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:05:51.277 Waiting for target to run... 00:05:51.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=66005 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 
00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 66005 /var/tmp/spdk_tgt.sock 00:05:51.277 07:51:56 -- common/autotest_common.sh@819 -- # '[' -z 66005 ']' 00:05:51.277 07:51:56 -- json_config/json_config_extra_key.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:51.277 07:51:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:51.277 07:51:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:51.277 07:51:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:51.277 07:51:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:51.277 07:51:56 -- common/autotest_common.sh@10 -- # set +x 00:05:51.277 [2024-07-13 07:51:57.037480] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:51.277 [2024-07-13 07:51:57.037566] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66005 ] 00:05:51.536 [2024-07-13 07:51:57.322936] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.536 [2024-07-13 07:51:57.341961] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:51.536 [2024-07-13 07:51:57.342360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.473 07:51:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:52.473 07:51:57 -- common/autotest_common.sh@852 -- # return 0 00:05:52.473 07:51:57 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:05:52.473 00:05:52.473 INFO: shutting down applications... 00:05:52.473 07:51:57 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 
00:05:52.473 07:51:57 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:05:52.473 07:51:57 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:05:52.473 07:51:57 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:05:52.473 07:51:57 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 66005 ]] 00:05:52.473 07:51:57 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 66005 00:05:52.473 07:51:57 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:05:52.473 07:51:57 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:52.473 07:51:57 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66005 00:05:52.473 07:51:57 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:05:52.732 07:51:58 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:05:52.732 07:51:58 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:05:52.732 07:51:58 -- json_config/json_config_extra_key.sh@50 -- # kill -0 66005 00:05:52.732 07:51:58 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:05:52.732 07:51:58 -- json_config/json_config_extra_key.sh@52 -- # break 00:05:52.732 07:51:58 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:05:52.732 07:51:58 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:05:52.732 SPDK target shutdown done 00:05:52.732 Success 00:05:52.732 07:51:58 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:05:52.732 00:05:52.732 real 0m1.582s 00:05:52.732 user 0m1.423s 00:05:52.732 sys 0m0.266s 00:05:52.732 ************************************ 00:05:52.732 END TEST json_config_extra_key 00:05:52.732 ************************************ 00:05:52.732 07:51:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.732 07:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:52.732 07:51:58 -- spdk/autotest.sh@180 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.732 07:51:58 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:52.732 07:51:58 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:52.732 07:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:52.991 ************************************ 00:05:52.991 START TEST alias_rpc 00:05:52.991 ************************************ 00:05:52.991 07:51:58 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:52.991 * Looking for test storage... 00:05:52.991 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:52.991 07:51:58 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:52.991 07:51:58 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=66074 00:05:52.991 07:51:58 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:52.991 07:51:58 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 66074 00:05:52.991 07:51:58 -- common/autotest_common.sh@819 -- # '[' -z 66074 ']' 00:05:52.991 07:51:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.991 07:51:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:52.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.991 07:51:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
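The extra_key pass launches the target directly from a JSON file and then exercises the polled SIGINT shutdown shown above. A minimal sketch of that lifecycle, with paths relative to the SPDK repo:

  # start the target from the canned config used by this test
  build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json test/json_config/extra_key.json &
  app_pid=$!
  # ... run whatever checks are needed, then request a graceful shutdown
  kill -SIGINT "$app_pid"
  # poll up to 30 times, 0.5 s apart, for the process to exit, as the test loop does
  for _ in $(seq 1 30); do
      kill -0 "$app_pid" 2>/dev/null || { echo 'SPDK target shutdown done'; break; }
      sleep 0.5
  done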
00:05:52.991 07:51:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:52.991 07:51:58 -- common/autotest_common.sh@10 -- # set +x 00:05:52.991 [2024-07-13 07:51:58.692954] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:52.991 [2024-07-13 07:51:58.693054] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66074 ] 00:05:53.250 [2024-07-13 07:51:58.830539] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.250 [2024-07-13 07:51:58.865031] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:53.250 [2024-07-13 07:51:58.865187] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.818 07:51:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:53.818 07:51:59 -- common/autotest_common.sh@852 -- # return 0 00:05:53.818 07:51:59 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:54.077 07:51:59 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 66074 00:05:54.077 07:51:59 -- common/autotest_common.sh@926 -- # '[' -z 66074 ']' 00:05:54.077 07:51:59 -- common/autotest_common.sh@930 -- # kill -0 66074 00:05:54.077 07:51:59 -- common/autotest_common.sh@931 -- # uname 00:05:54.077 07:51:59 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:54.077 07:51:59 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66074 00:05:54.335 killing process with pid 66074 00:05:54.335 07:51:59 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:54.335 07:51:59 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:54.335 07:51:59 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66074' 00:05:54.335 07:51:59 -- common/autotest_common.sh@945 -- # kill 66074 00:05:54.335 07:51:59 -- common/autotest_common.sh@950 -- # wait 66074 00:05:54.335 ************************************ 00:05:54.335 END TEST alias_rpc 00:05:54.335 ************************************ 00:05:54.335 00:05:54.335 real 0m1.568s 00:05:54.335 user 0m1.869s 00:05:54.335 sys 0m0.317s 00:05:54.335 07:52:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.335 07:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:54.594 07:52:00 -- spdk/autotest.sh@182 -- # [[ 0 -eq 0 ]] 00:05:54.594 07:52:00 -- spdk/autotest.sh@183 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:54.594 07:52:00 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:54.594 07:52:00 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:54.594 07:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:54.594 ************************************ 00:05:54.594 START TEST spdkcli_tcp 00:05:54.594 ************************************ 00:05:54.594 07:52:00 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:54.594 * Looking for test storage... 
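The alias_rpc pass is essentially a round-trip of the configuration through rpc.py; the only RPC-side detail is the -i flag on load_config. A small sketch, assuming a running target on the default socket; /tmp/cfg.json is an illustrative temp path:

  # dump the current config, then load it back; -i tells load_config to also accept
  # deprecated (aliased) method names that may appear in older config files
  scripts/rpc.py save_config > /tmp/cfg.json
  scripts/rpc.py load_config -i < /tmp/cfg.json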
00:05:54.594 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:54.594 07:52:00 -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:54.594 07:52:00 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:54.594 07:52:00 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:54.594 07:52:00 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:54.594 07:52:00 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:54.594 07:52:00 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:54.594 07:52:00 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:54.594 07:52:00 -- common/autotest_common.sh@712 -- # xtrace_disable 00:05:54.594 07:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:54.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.594 07:52:00 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=66138 00:05:54.594 07:52:00 -- spdkcli/tcp.sh@27 -- # waitforlisten 66138 00:05:54.595 07:52:00 -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:54.595 07:52:00 -- common/autotest_common.sh@819 -- # '[' -z 66138 ']' 00:05:54.595 07:52:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.595 07:52:00 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:54.595 07:52:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.595 07:52:00 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:54.595 07:52:00 -- common/autotest_common.sh@10 -- # set +x 00:05:54.595 [2024-07-13 07:52:00.314278] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:05:54.595 [2024-07-13 07:52:00.314390] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66138 ] 00:05:54.854 [2024-07-13 07:52:00.449488] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:54.854 [2024-07-13 07:52:00.484748] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:54.854 [2024-07-13 07:52:00.485021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.854 [2024-07-13 07:52:00.485030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.423 07:52:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:55.423 07:52:01 -- common/autotest_common.sh@852 -- # return 0 00:05:55.423 07:52:01 -- spdkcli/tcp.sh@31 -- # socat_pid=66155 00:05:55.423 07:52:01 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:55.423 07:52:01 -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:55.682 [ 00:05:55.682 "bdev_malloc_delete", 00:05:55.682 "bdev_malloc_create", 00:05:55.682 "bdev_null_resize", 00:05:55.682 "bdev_null_delete", 00:05:55.682 "bdev_null_create", 00:05:55.682 "bdev_nvme_cuse_unregister", 00:05:55.682 "bdev_nvme_cuse_register", 00:05:55.682 "bdev_opal_new_user", 00:05:55.682 "bdev_opal_set_lock_state", 00:05:55.682 "bdev_opal_delete", 00:05:55.682 "bdev_opal_get_info", 00:05:55.682 "bdev_opal_create", 00:05:55.682 "bdev_nvme_opal_revert", 00:05:55.682 "bdev_nvme_opal_init", 00:05:55.682 "bdev_nvme_send_cmd", 00:05:55.682 "bdev_nvme_get_path_iostat", 00:05:55.682 "bdev_nvme_get_mdns_discovery_info", 00:05:55.682 "bdev_nvme_stop_mdns_discovery", 00:05:55.682 "bdev_nvme_start_mdns_discovery", 00:05:55.682 "bdev_nvme_set_multipath_policy", 00:05:55.682 "bdev_nvme_set_preferred_path", 00:05:55.682 "bdev_nvme_get_io_paths", 00:05:55.682 "bdev_nvme_remove_error_injection", 00:05:55.682 "bdev_nvme_add_error_injection", 00:05:55.682 "bdev_nvme_get_discovery_info", 00:05:55.682 "bdev_nvme_stop_discovery", 00:05:55.682 "bdev_nvme_start_discovery", 00:05:55.682 "bdev_nvme_get_controller_health_info", 00:05:55.682 "bdev_nvme_disable_controller", 00:05:55.682 "bdev_nvme_enable_controller", 00:05:55.682 "bdev_nvme_reset_controller", 00:05:55.682 "bdev_nvme_get_transport_statistics", 00:05:55.682 "bdev_nvme_apply_firmware", 00:05:55.682 "bdev_nvme_detach_controller", 00:05:55.682 "bdev_nvme_get_controllers", 00:05:55.682 "bdev_nvme_attach_controller", 00:05:55.682 "bdev_nvme_set_hotplug", 00:05:55.682 "bdev_nvme_set_options", 00:05:55.682 "bdev_passthru_delete", 00:05:55.682 "bdev_passthru_create", 00:05:55.682 "bdev_lvol_grow_lvstore", 00:05:55.682 "bdev_lvol_get_lvols", 00:05:55.682 "bdev_lvol_get_lvstores", 00:05:55.682 "bdev_lvol_delete", 00:05:55.682 "bdev_lvol_set_read_only", 00:05:55.682 "bdev_lvol_resize", 00:05:55.682 "bdev_lvol_decouple_parent", 00:05:55.682 "bdev_lvol_inflate", 00:05:55.682 "bdev_lvol_rename", 00:05:55.682 "bdev_lvol_clone_bdev", 00:05:55.682 "bdev_lvol_clone", 00:05:55.682 "bdev_lvol_snapshot", 00:05:55.682 "bdev_lvol_create", 00:05:55.682 "bdev_lvol_delete_lvstore", 00:05:55.682 "bdev_lvol_rename_lvstore", 00:05:55.682 "bdev_lvol_create_lvstore", 00:05:55.682 "bdev_raid_set_options", 00:05:55.682 "bdev_raid_remove_base_bdev", 00:05:55.682 "bdev_raid_add_base_bdev", 
00:05:55.682 "bdev_raid_delete", 00:05:55.682 "bdev_raid_create", 00:05:55.682 "bdev_raid_get_bdevs", 00:05:55.682 "bdev_error_inject_error", 00:05:55.682 "bdev_error_delete", 00:05:55.682 "bdev_error_create", 00:05:55.682 "bdev_split_delete", 00:05:55.682 "bdev_split_create", 00:05:55.682 "bdev_delay_delete", 00:05:55.682 "bdev_delay_create", 00:05:55.682 "bdev_delay_update_latency", 00:05:55.682 "bdev_zone_block_delete", 00:05:55.682 "bdev_zone_block_create", 00:05:55.682 "blobfs_create", 00:05:55.682 "blobfs_detect", 00:05:55.682 "blobfs_set_cache_size", 00:05:55.682 "bdev_aio_delete", 00:05:55.682 "bdev_aio_rescan", 00:05:55.682 "bdev_aio_create", 00:05:55.682 "bdev_ftl_set_property", 00:05:55.682 "bdev_ftl_get_properties", 00:05:55.682 "bdev_ftl_get_stats", 00:05:55.682 "bdev_ftl_unmap", 00:05:55.682 "bdev_ftl_unload", 00:05:55.682 "bdev_ftl_delete", 00:05:55.682 "bdev_ftl_load", 00:05:55.682 "bdev_ftl_create", 00:05:55.682 "bdev_virtio_attach_controller", 00:05:55.682 "bdev_virtio_scsi_get_devices", 00:05:55.682 "bdev_virtio_detach_controller", 00:05:55.682 "bdev_virtio_blk_set_hotplug", 00:05:55.682 "bdev_iscsi_delete", 00:05:55.682 "bdev_iscsi_create", 00:05:55.682 "bdev_iscsi_set_options", 00:05:55.682 "bdev_uring_delete", 00:05:55.682 "bdev_uring_create", 00:05:55.682 "accel_error_inject_error", 00:05:55.682 "ioat_scan_accel_module", 00:05:55.682 "dsa_scan_accel_module", 00:05:55.682 "iaa_scan_accel_module", 00:05:55.682 "iscsi_set_options", 00:05:55.682 "iscsi_get_auth_groups", 00:05:55.682 "iscsi_auth_group_remove_secret", 00:05:55.682 "iscsi_auth_group_add_secret", 00:05:55.682 "iscsi_delete_auth_group", 00:05:55.682 "iscsi_create_auth_group", 00:05:55.682 "iscsi_set_discovery_auth", 00:05:55.682 "iscsi_get_options", 00:05:55.682 "iscsi_target_node_request_logout", 00:05:55.682 "iscsi_target_node_set_redirect", 00:05:55.682 "iscsi_target_node_set_auth", 00:05:55.682 "iscsi_target_node_add_lun", 00:05:55.682 "iscsi_get_connections", 00:05:55.682 "iscsi_portal_group_set_auth", 00:05:55.682 "iscsi_start_portal_group", 00:05:55.682 "iscsi_delete_portal_group", 00:05:55.682 "iscsi_create_portal_group", 00:05:55.682 "iscsi_get_portal_groups", 00:05:55.682 "iscsi_delete_target_node", 00:05:55.682 "iscsi_target_node_remove_pg_ig_maps", 00:05:55.682 "iscsi_target_node_add_pg_ig_maps", 00:05:55.682 "iscsi_create_target_node", 00:05:55.682 "iscsi_get_target_nodes", 00:05:55.682 "iscsi_delete_initiator_group", 00:05:55.682 "iscsi_initiator_group_remove_initiators", 00:05:55.682 "iscsi_initiator_group_add_initiators", 00:05:55.682 "iscsi_create_initiator_group", 00:05:55.682 "iscsi_get_initiator_groups", 00:05:55.682 "nvmf_set_crdt", 00:05:55.682 "nvmf_set_config", 00:05:55.682 "nvmf_set_max_subsystems", 00:05:55.682 "nvmf_subsystem_get_listeners", 00:05:55.682 "nvmf_subsystem_get_qpairs", 00:05:55.682 "nvmf_subsystem_get_controllers", 00:05:55.682 "nvmf_get_stats", 00:05:55.682 "nvmf_get_transports", 00:05:55.682 "nvmf_create_transport", 00:05:55.682 "nvmf_get_targets", 00:05:55.682 "nvmf_delete_target", 00:05:55.682 "nvmf_create_target", 00:05:55.682 "nvmf_subsystem_allow_any_host", 00:05:55.682 "nvmf_subsystem_remove_host", 00:05:55.682 "nvmf_subsystem_add_host", 00:05:55.682 "nvmf_subsystem_remove_ns", 00:05:55.682 "nvmf_subsystem_add_ns", 00:05:55.682 "nvmf_subsystem_listener_set_ana_state", 00:05:55.682 "nvmf_discovery_get_referrals", 00:05:55.682 "nvmf_discovery_remove_referral", 00:05:55.682 "nvmf_discovery_add_referral", 00:05:55.682 "nvmf_subsystem_remove_listener", 00:05:55.682 
"nvmf_subsystem_add_listener", 00:05:55.682 "nvmf_delete_subsystem", 00:05:55.682 "nvmf_create_subsystem", 00:05:55.682 "nvmf_get_subsystems", 00:05:55.682 "env_dpdk_get_mem_stats", 00:05:55.682 "nbd_get_disks", 00:05:55.682 "nbd_stop_disk", 00:05:55.682 "nbd_start_disk", 00:05:55.682 "ublk_recover_disk", 00:05:55.682 "ublk_get_disks", 00:05:55.682 "ublk_stop_disk", 00:05:55.682 "ublk_start_disk", 00:05:55.682 "ublk_destroy_target", 00:05:55.682 "ublk_create_target", 00:05:55.682 "virtio_blk_create_transport", 00:05:55.682 "virtio_blk_get_transports", 00:05:55.682 "vhost_controller_set_coalescing", 00:05:55.682 "vhost_get_controllers", 00:05:55.682 "vhost_delete_controller", 00:05:55.682 "vhost_create_blk_controller", 00:05:55.682 "vhost_scsi_controller_remove_target", 00:05:55.682 "vhost_scsi_controller_add_target", 00:05:55.682 "vhost_start_scsi_controller", 00:05:55.682 "vhost_create_scsi_controller", 00:05:55.682 "thread_set_cpumask", 00:05:55.682 "framework_get_scheduler", 00:05:55.682 "framework_set_scheduler", 00:05:55.682 "framework_get_reactors", 00:05:55.682 "thread_get_io_channels", 00:05:55.682 "thread_get_pollers", 00:05:55.682 "thread_get_stats", 00:05:55.682 "framework_monitor_context_switch", 00:05:55.682 "spdk_kill_instance", 00:05:55.682 "log_enable_timestamps", 00:05:55.682 "log_get_flags", 00:05:55.682 "log_clear_flag", 00:05:55.682 "log_set_flag", 00:05:55.682 "log_get_level", 00:05:55.682 "log_set_level", 00:05:55.682 "log_get_print_level", 00:05:55.682 "log_set_print_level", 00:05:55.682 "framework_enable_cpumask_locks", 00:05:55.682 "framework_disable_cpumask_locks", 00:05:55.682 "framework_wait_init", 00:05:55.682 "framework_start_init", 00:05:55.682 "scsi_get_devices", 00:05:55.682 "bdev_get_histogram", 00:05:55.682 "bdev_enable_histogram", 00:05:55.682 "bdev_set_qos_limit", 00:05:55.682 "bdev_set_qd_sampling_period", 00:05:55.682 "bdev_get_bdevs", 00:05:55.682 "bdev_reset_iostat", 00:05:55.682 "bdev_get_iostat", 00:05:55.682 "bdev_examine", 00:05:55.682 "bdev_wait_for_examine", 00:05:55.682 "bdev_set_options", 00:05:55.682 "notify_get_notifications", 00:05:55.682 "notify_get_types", 00:05:55.682 "accel_get_stats", 00:05:55.682 "accel_set_options", 00:05:55.682 "accel_set_driver", 00:05:55.682 "accel_crypto_key_destroy", 00:05:55.682 "accel_crypto_keys_get", 00:05:55.682 "accel_crypto_key_create", 00:05:55.682 "accel_assign_opc", 00:05:55.682 "accel_get_module_info", 00:05:55.682 "accel_get_opc_assignments", 00:05:55.682 "vmd_rescan", 00:05:55.682 "vmd_remove_device", 00:05:55.682 "vmd_enable", 00:05:55.682 "sock_set_default_impl", 00:05:55.682 "sock_impl_set_options", 00:05:55.682 "sock_impl_get_options", 00:05:55.682 "iobuf_get_stats", 00:05:55.682 "iobuf_set_options", 00:05:55.682 "framework_get_pci_devices", 00:05:55.682 "framework_get_config", 00:05:55.682 "framework_get_subsystems", 00:05:55.682 "trace_get_info", 00:05:55.682 "trace_get_tpoint_group_mask", 00:05:55.682 "trace_disable_tpoint_group", 00:05:55.682 "trace_enable_tpoint_group", 00:05:55.682 "trace_clear_tpoint_mask", 00:05:55.682 "trace_set_tpoint_mask", 00:05:55.682 "spdk_get_version", 00:05:55.682 "rpc_get_methods" 00:05:55.682 ] 00:05:55.682 07:52:01 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:55.682 07:52:01 -- common/autotest_common.sh@718 -- # xtrace_disable 00:05:55.682 07:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:55.682 07:52:01 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:55.682 07:52:01 -- spdkcli/tcp.sh@38 -- # killprocess 66138 00:05:55.682 
07:52:01 -- common/autotest_common.sh@926 -- # '[' -z 66138 ']' 00:05:55.682 07:52:01 -- common/autotest_common.sh@930 -- # kill -0 66138 00:05:55.682 07:52:01 -- common/autotest_common.sh@931 -- # uname 00:05:55.683 07:52:01 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:55.683 07:52:01 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66138 00:05:55.941 killing process with pid 66138 00:05:55.941 07:52:01 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:05:55.941 07:52:01 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:55.941 07:52:01 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66138' 00:05:55.941 07:52:01 -- common/autotest_common.sh@945 -- # kill 66138 00:05:55.941 07:52:01 -- common/autotest_common.sh@950 -- # wait 66138 00:05:55.941 ************************************ 00:05:55.941 END TEST spdkcli_tcp 00:05:55.941 ************************************ 00:05:55.941 00:05:55.941 real 0m1.564s 00:05:55.941 user 0m3.007s 00:05:55.941 sys 0m0.346s 00:05:55.941 07:52:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:55.941 07:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.217 07:52:01 -- spdk/autotest.sh@186 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:56.217 07:52:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:56.217 07:52:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:56.217 07:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.217 ************************************ 00:05:56.217 START TEST dpdk_mem_utility 00:05:56.217 ************************************ 00:05:56.217 07:52:01 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:56.217 * Looking for test storage... 00:05:56.217 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:56.217 07:52:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:56.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.217 07:52:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=66228 00:05:56.217 07:52:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:56.217 07:52:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 66228 00:05:56.217 07:52:01 -- common/autotest_common.sh@819 -- # '[' -z 66228 ']' 00:05:56.217 07:52:01 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.217 07:52:01 -- common/autotest_common.sh@824 -- # local max_retries=100 00:05:56.217 07:52:01 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.217 07:52:01 -- common/autotest_common.sh@828 -- # xtrace_disable 00:05:56.217 07:52:01 -- common/autotest_common.sh@10 -- # set +x 00:05:56.217 [2024-07-13 07:52:01.918152] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:05:56.217 [2024-07-13 07:52:01.918487] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66228 ] 00:05:56.498 [2024-07-13 07:52:02.051825] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.498 [2024-07-13 07:52:02.085261] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:56.498 [2024-07-13 07:52:02.085705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.079 07:52:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:05:57.079 07:52:02 -- common/autotest_common.sh@852 -- # return 0 00:05:57.079 07:52:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:57.079 07:52:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:57.079 07:52:02 -- common/autotest_common.sh@551 -- # xtrace_disable 00:05:57.079 07:52:02 -- common/autotest_common.sh@10 -- # set +x 00:05:57.079 { 00:05:57.079 "filename": "/tmp/spdk_mem_dump.txt" 00:05:57.079 } 00:05:57.079 07:52:02 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:05:57.079 07:52:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:57.340 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:57.340 1 heaps totaling size 814.000000 MiB 00:05:57.340 size: 814.000000 MiB heap id: 0 00:05:57.340 end heaps---------- 00:05:57.340 8 mempools totaling size 598.116089 MiB 00:05:57.340 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:57.340 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:57.340 size: 84.521057 MiB name: bdev_io_66228 00:05:57.340 size: 51.011292 MiB name: evtpool_66228 00:05:57.340 size: 50.003479 MiB name: msgpool_66228 00:05:57.340 size: 21.763794 MiB name: PDU_Pool 00:05:57.340 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:57.340 size: 0.026123 MiB name: Session_Pool 00:05:57.340 end mempools------- 00:05:57.340 6 memzones totaling size 4.142822 MiB 00:05:57.340 size: 1.000366 MiB name: RG_ring_0_66228 00:05:57.340 size: 1.000366 MiB name: RG_ring_1_66228 00:05:57.340 size: 1.000366 MiB name: RG_ring_4_66228 00:05:57.340 size: 1.000366 MiB name: RG_ring_5_66228 00:05:57.340 size: 0.125366 MiB name: RG_ring_2_66228 00:05:57.340 size: 0.015991 MiB name: RG_ring_3_66228 00:05:57.340 end memzones------- 00:05:57.340 07:52:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:57.340 heap id: 0 total size: 814.000000 MiB number of busy elements: 303 number of free elements: 15 00:05:57.340 list of free elements. 
size: 12.471375 MiB 00:05:57.340 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:57.340 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:57.340 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:57.340 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:57.340 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:57.340 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:57.340 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:57.340 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:57.340 element at address: 0x200000200000 with size: 0.832825 MiB 00:05:57.340 element at address: 0x20001aa00000 with size: 0.569153 MiB 00:05:57.340 element at address: 0x20000b200000 with size: 0.488892 MiB 00:05:57.340 element at address: 0x200000800000 with size: 0.486145 MiB 00:05:57.340 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:57.340 element at address: 0x200027e00000 with size: 0.395752 MiB 00:05:57.340 element at address: 0x200003a00000 with size: 0.347839 MiB 00:05:57.340 list of standard malloc elements. size: 199.266052 MiB 00:05:57.340 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:57.340 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:57.340 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:57.340 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:57.340 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:57.340 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:57.340 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:57.340 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:57.340 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:57.340 element at address: 0x2000002d5340 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d5400 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d54c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d5580 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d5640 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d5700 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d57c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d5880 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d5940 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d5a00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d5ac0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d5b80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d5c40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d5d00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d5dc0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d5e80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d5f40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6000 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d60c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6180 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6240 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6300 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d63c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6480 with size: 0.000183 MiB 
00:05:57.340 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d68c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6980 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6a40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6b00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6bc0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6c80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6d40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6e00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6ec0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d6f80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d7040 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d7100 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d7700 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000087c740 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000087c800 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000087c8c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000087c980 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000087ca40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a590c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59180 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59240 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59300 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a593c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59480 with size: 0.000183 MiB 00:05:57.340 element at 
address: 0x200003a59540 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59600 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a596c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59780 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59840 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59900 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a599c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59a80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59b40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59c00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59cc0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59d80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59e40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59f00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a59fc0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000b27d280 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000b27d340 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000b27d400 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000b27d4c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000b27d580 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000b27d640 
with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa91b40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa91c00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa91cc0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa91d80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa91e40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa91f00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa91fc0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92080 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92140 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92200 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa922c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92380 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92440 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92500 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa925c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92680 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92740 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92800 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa928c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92980 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93700 with size: 0.000183 MiB 
00:05:57.340 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94a80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:57.340 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e65500 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e655c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6c1c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6c3c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6c480 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6c540 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6c600 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6c6c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6c780 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6c840 with size: 0.000183 MiB 00:05:57.340 element at 
address: 0x200027e6c900 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6c9c0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6ca80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6cb40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6cc00 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6ccc0 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:05:57.340 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6e580 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6edc0 
with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:57.341 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:57.341 list of memzone associated elements. size: 602.262573 MiB 00:05:57.341 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:57.341 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:57.341 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:57.341 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:57.341 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:57.341 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_66228_0 00:05:57.341 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:57.341 associated memzone info: size: 48.002930 MiB name: MP_evtpool_66228_0 00:05:57.341 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:57.341 associated memzone info: size: 48.002930 MiB name: MP_msgpool_66228_0 00:05:57.341 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:57.341 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:57.341 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:57.341 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:57.341 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:57.341 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_66228 00:05:57.341 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:57.341 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_66228 00:05:57.341 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:57.341 associated memzone info: size: 1.007996 MiB name: MP_evtpool_66228 00:05:57.341 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:57.341 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:57.341 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:57.341 
associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:57.341 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:57.341 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:57.341 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:57.341 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:57.341 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:57.341 associated memzone info: size: 1.000366 MiB name: RG_ring_0_66228 00:05:57.341 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:57.341 associated memzone info: size: 1.000366 MiB name: RG_ring_1_66228 00:05:57.341 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:57.341 associated memzone info: size: 1.000366 MiB name: RG_ring_4_66228 00:05:57.341 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:57.341 associated memzone info: size: 1.000366 MiB name: RG_ring_5_66228 00:05:57.341 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:57.341 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_66228 00:05:57.341 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:57.341 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:57.341 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:57.341 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:57.341 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:57.341 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:57.341 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:57.341 associated memzone info: size: 0.125366 MiB name: RG_ring_2_66228 00:05:57.341 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:57.341 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:57.341 element at address: 0x200027e65680 with size: 0.023743 MiB 00:05:57.341 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:57.341 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:57.341 associated memzone info: size: 0.015991 MiB name: RG_ring_3_66228 00:05:57.341 element at address: 0x200027e6b7c0 with size: 0.002441 MiB 00:05:57.341 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:57.341 element at address: 0x2000002d6780 with size: 0.000305 MiB 00:05:57.341 associated memzone info: size: 0.000183 MiB name: MP_msgpool_66228 00:05:57.341 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:57.341 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_66228 00:05:57.341 element at address: 0x200027e6c280 with size: 0.000305 MiB 00:05:57.341 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:57.341 07:52:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:57.341 07:52:02 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 66228 00:05:57.341 07:52:02 -- common/autotest_common.sh@926 -- # '[' -z 66228 ']' 00:05:57.341 07:52:02 -- common/autotest_common.sh@930 -- # kill -0 66228 00:05:57.341 07:52:02 -- common/autotest_common.sh@931 -- # uname 00:05:57.341 07:52:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:05:57.341 07:52:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66228 00:05:57.341 killing process with pid 66228 00:05:57.341 07:52:03 -- common/autotest_common.sh@932 -- # process_name=reactor_0 
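(Annotation: the heap/mempool/memzone dump above is produced by the flow this test traces — env_dpdk_get_mem_stats makes the target write /tmp/spdk_mem_dump.txt, and dpdk_mem_info.py post-processes it. A minimal sketch of that sequence, assuming the default local RPC socket:)

```bash
# Sketch of the sequence exercised by test_dpdk_mem_info.sh (script paths as in this repo).
./scripts/rpc.py env_dpdk_get_mem_stats    # target dumps its DPDK memory state to /tmp/spdk_mem_dump.txt
./scripts/dpdk_mem_info.py                 # summary view: heaps, mempools, memzones
./scripts/dpdk_mem_info.py -m 0            # detailed element list for memory id 0, as shown above
```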
00:05:57.341 07:52:03 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:05:57.341 07:52:03 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66228' 00:05:57.341 07:52:03 -- common/autotest_common.sh@945 -- # kill 66228 00:05:57.341 07:52:03 -- common/autotest_common.sh@950 -- # wait 66228 00:05:57.600 ************************************ 00:05:57.600 END TEST dpdk_mem_utility 00:05:57.600 ************************************ 00:05:57.600 00:05:57.600 real 0m1.472s 00:05:57.600 user 0m1.675s 00:05:57.600 sys 0m0.317s 00:05:57.600 07:52:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.600 07:52:03 -- common/autotest_common.sh@10 -- # set +x 00:05:57.600 07:52:03 -- spdk/autotest.sh@187 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:57.600 07:52:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:05:57.600 07:52:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.600 07:52:03 -- common/autotest_common.sh@10 -- # set +x 00:05:57.600 ************************************ 00:05:57.600 START TEST event 00:05:57.600 ************************************ 00:05:57.600 07:52:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:05:57.600 * Looking for test storage... 00:05:57.600 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:57.600 07:52:03 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:57.600 07:52:03 -- bdev/nbd_common.sh@6 -- # set -e 00:05:57.600 07:52:03 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:57.600 07:52:03 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:05:57.600 07:52:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:57.600 07:52:03 -- common/autotest_common.sh@10 -- # set +x 00:05:57.600 ************************************ 00:05:57.600 START TEST event_perf 00:05:57.600 ************************************ 00:05:57.600 07:52:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:57.600 Running I/O for 1 seconds...[2024-07-13 07:52:03.415099] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:57.858 [2024-07-13 07:52:03.415192] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66293 ] 00:05:57.859 [2024-07-13 07:52:03.550856] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:57.859 [2024-07-13 07:52:03.583552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:57.859 [2024-07-13 07:52:03.583663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.859 [2024-07-13 07:52:03.583759] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.859 [2024-07-13 07:52:03.583761] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.246 Running I/O for 1 seconds... 00:05:59.246 lcore 0: 199405 00:05:59.246 lcore 1: 199406 00:05:59.246 lcore 2: 199406 00:05:59.246 lcore 3: 199406 00:05:59.246 done. 
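(Annotation: the per-lcore event counts above come from the event_perf binary invocation traced in this step; it takes a reactor core mask and a duration in seconds. A standalone sketch of the same run, from the repo root:)

```bash
# Sketch: run the event framework perf test on 4 reactors for 1 second,
# matching the -m 0xF -t 1 invocation traced above.
./test/event/event_perf/event_perf -m 0xF -t 1
```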
00:05:59.246 ************************************ 00:05:59.246 END TEST event_perf 00:05:59.246 ************************************ 00:05:59.246 00:05:59.246 real 0m1.237s 00:05:59.246 user 0m4.067s 00:05:59.246 sys 0m0.052s 00:05:59.246 07:52:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.246 07:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:59.246 07:52:04 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:59.246 07:52:04 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:05:59.246 07:52:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:05:59.246 07:52:04 -- common/autotest_common.sh@10 -- # set +x 00:05:59.246 ************************************ 00:05:59.246 START TEST event_reactor 00:05:59.246 ************************************ 00:05:59.246 07:52:04 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:59.246 [2024-07-13 07:52:04.691026] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:05:59.246 [2024-07-13 07:52:04.691107] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66337 ] 00:05:59.246 [2024-07-13 07:52:04.822085] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.246 [2024-07-13 07:52:04.853456] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.181 test_start 00:06:00.181 oneshot 00:06:00.181 tick 100 00:06:00.181 tick 100 00:06:00.181 tick 250 00:06:00.181 tick 100 00:06:00.181 tick 100 00:06:00.181 tick 500 00:06:00.181 tick 100 00:06:00.181 tick 250 00:06:00.181 tick 100 00:06:00.181 tick 100 00:06:00.181 tick 250 00:06:00.181 tick 100 00:06:00.181 tick 100 00:06:00.181 test_end 00:06:00.181 00:06:00.181 real 0m1.221s 00:06:00.181 user 0m1.084s 00:06:00.181 sys 0m0.033s 00:06:00.181 07:52:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.181 07:52:05 -- common/autotest_common.sh@10 -- # set +x 00:06:00.181 ************************************ 00:06:00.181 END TEST event_reactor 00:06:00.181 ************************************ 00:06:00.182 07:52:05 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.182 07:52:05 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:06:00.182 07:52:05 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:00.182 07:52:05 -- common/autotest_common.sh@10 -- # set +x 00:06:00.182 ************************************ 00:06:00.182 START TEST event_reactor_perf 00:06:00.182 ************************************ 00:06:00.182 07:52:05 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:00.182 [2024-07-13 07:52:05.973080] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:00.182 [2024-07-13 07:52:05.973167] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66367 ] 00:06:00.440 [2024-07-13 07:52:06.109701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.440 [2024-07-13 07:52:06.150268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.816 test_start 00:06:01.816 test_end 00:06:01.816 Performance: 414153 events per second 00:06:01.816 00:06:01.816 real 0m1.253s 00:06:01.816 user 0m1.099s 00:06:01.816 sys 0m0.048s 00:06:01.816 ************************************ 00:06:01.816 END TEST event_reactor_perf 00:06:01.816 07:52:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:01.816 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:01.816 ************************************ 00:06:01.816 07:52:07 -- event/event.sh@49 -- # uname -s 00:06:01.816 07:52:07 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:01.816 07:52:07 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:01.816 07:52:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:01.816 07:52:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:01.816 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:01.816 ************************************ 00:06:01.816 START TEST event_scheduler 00:06:01.816 ************************************ 00:06:01.816 07:52:07 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:01.816 * Looking for test storage... 00:06:01.816 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:01.816 07:52:07 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:01.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.816 07:52:07 -- scheduler/scheduler.sh@35 -- # scheduler_pid=66422 00:06:01.816 07:52:07 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:01.816 07:52:07 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:01.816 07:52:07 -- scheduler/scheduler.sh@37 -- # waitforlisten 66422 00:06:01.816 07:52:07 -- common/autotest_common.sh@819 -- # '[' -z 66422 ']' 00:06:01.816 07:52:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.816 07:52:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:01.816 07:52:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.816 07:52:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:01.816 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:01.816 [2024-07-13 07:52:07.386162] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:01.816 [2024-07-13 07:52:07.386478] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66422 ] 00:06:01.816 [2024-07-13 07:52:07.526034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:01.816 [2024-07-13 07:52:07.576277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.816 [2024-07-13 07:52:07.576325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.816 [2024-07-13 07:52:07.576434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.816 [2024-07-13 07:52:07.576442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.816 07:52:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:01.816 07:52:07 -- common/autotest_common.sh@852 -- # return 0 00:06:01.816 07:52:07 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:01.816 07:52:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:01.816 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 POWER: Env isn't set yet! 00:06:02.075 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:02.075 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.075 POWER: Cannot set governor of lcore 0 to userspace 00:06:02.075 POWER: Attempting to initialise PSTAT power management... 00:06:02.075 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.075 POWER: Cannot set governor of lcore 0 to performance 00:06:02.075 POWER: Attempting to initialise CPPC power management... 00:06:02.075 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:02.075 POWER: Cannot set governor of lcore 0 to userspace 00:06:02.075 POWER: Attempting to initialise VM power management... 00:06:02.075 GUEST_CHANNEL: Unable to to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:02.075 POWER: Unable to set Power Management Environment for lcore 0 00:06:02.075 [2024-07-13 07:52:07.637882] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:06:02.075 [2024-07-13 07:52:07.638248] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:06:02.075 [2024-07-13 07:52:07.638620] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:06:02.075 [2024-07-13 07:52:07.638649] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:02.075 [2024-07-13 07:52:07.638660] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:02.075 [2024-07-13 07:52:07.638671] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:02.075 07:52:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:02.075 07:52:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.075 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 [2024-07-13 07:52:07.696475] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
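(Annotation: the trace above selects the dynamic scheduler and then finishes framework init; this is only possible because the app was launched with --wait-for-rpc, which holds off subsystem initialization. A minimal sketch of the same RPC sequence against such a target, using the default local socket:)

```bash
# Sketch: pick the dynamic scheduler before init, then complete startup.
./scripts/rpc.py framework_set_scheduler dynamic
./scripts/rpc.py framework_start_init
./scripts/rpc.py framework_get_scheduler   # confirm which scheduler is now active
```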
00:06:02.075 07:52:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:02.075 07:52:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:02.075 07:52:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:02.075 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 ************************************ 00:06:02.075 START TEST scheduler_create_thread 00:06:02.075 ************************************ 00:06:02.075 07:52:07 -- common/autotest_common.sh@1104 -- # scheduler_create_thread 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:02.075 07:52:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.075 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 2 00:06:02.075 07:52:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:02.075 07:52:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.075 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 3 00:06:02.075 07:52:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:02.075 07:52:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.075 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 4 00:06:02.075 07:52:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:02.075 07:52:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.075 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 5 00:06:02.075 07:52:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:02.075 07:52:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.075 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 6 00:06:02.075 07:52:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:02.075 07:52:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.075 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 7 00:06:02.075 07:52:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:02.075 07:52:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.075 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 8 00:06:02.075 07:52:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:02.075 07:52:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.075 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 9 00:06:02.075 
07:52:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:02.075 07:52:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.075 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 10 00:06:02.075 07:52:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:02.075 07:52:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.075 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 07:52:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:02.075 07:52:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.075 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:02.075 07:52:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:02.075 07:52:07 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:02.075 07:52:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:02.075 07:52:07 -- common/autotest_common.sh@10 -- # set +x 00:06:03.449 07:52:09 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:03.449 07:52:09 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:03.449 07:52:09 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:03.449 07:52:09 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:03.449 07:52:09 -- common/autotest_common.sh@10 -- # set +x 00:06:04.821 ************************************ 00:06:04.821 END TEST scheduler_create_thread 00:06:04.821 ************************************ 00:06:04.821 07:52:10 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:04.821 00:06:04.821 real 0m2.612s 00:06:04.821 user 0m0.018s 00:06:04.821 sys 0m0.007s 00:06:04.821 07:52:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.821 07:52:10 -- common/autotest_common.sh@10 -- # set +x 00:06:04.821 07:52:10 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:04.821 07:52:10 -- scheduler/scheduler.sh@46 -- # killprocess 66422 00:06:04.821 07:52:10 -- common/autotest_common.sh@926 -- # '[' -z 66422 ']' 00:06:04.821 07:52:10 -- common/autotest_common.sh@930 -- # kill -0 66422 00:06:04.821 07:52:10 -- common/autotest_common.sh@931 -- # uname 00:06:04.821 07:52:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:04.821 07:52:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66422 00:06:04.821 killing process with pid 66422 00:06:04.821 07:52:10 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:04.821 07:52:10 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:04.821 07:52:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66422' 00:06:04.821 07:52:10 -- common/autotest_common.sh@945 -- # kill 66422 00:06:04.821 07:52:10 -- common/autotest_common.sh@950 -- # wait 66422 00:06:05.080 [2024-07-13 07:52:10.800412] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
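(Annotation: the thread create/activate/delete calls above go through test-only RPCs registered by the scheduler test app and exposed via the scheduler_plugin for rpc.py; they are not part of the standard method set listed earlier. A hedged sketch of those calls, assuming the plugin is on rpc.py's plugin search path; thread IDs 11 and 12 are the ones reported in this run:)

```bash
# Sketch: plugin RPCs driven by the scheduler_create_thread test (scheduler test app only).
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12
```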
00:06:05.338 00:06:05.338 real 0m3.682s 00:06:05.338 user 0m5.467s 00:06:05.338 sys 0m0.294s 00:06:05.338 07:52:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.338 ************************************ 00:06:05.338 END TEST event_scheduler 00:06:05.338 ************************************ 00:06:05.338 07:52:10 -- common/autotest_common.sh@10 -- # set +x 00:06:05.338 07:52:10 -- event/event.sh@51 -- # modprobe -n nbd 00:06:05.338 07:52:10 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:05.338 07:52:10 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:05.338 07:52:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:05.338 07:52:10 -- common/autotest_common.sh@10 -- # set +x 00:06:05.338 ************************************ 00:06:05.338 START TEST app_repeat 00:06:05.338 ************************************ 00:06:05.338 07:52:10 -- common/autotest_common.sh@1104 -- # app_repeat_test 00:06:05.338 07:52:10 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.338 07:52:10 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.338 07:52:10 -- event/event.sh@13 -- # local nbd_list 00:06:05.338 07:52:10 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.338 07:52:10 -- event/event.sh@14 -- # local bdev_list 00:06:05.338 07:52:10 -- event/event.sh@15 -- # local repeat_times=4 00:06:05.338 07:52:10 -- event/event.sh@17 -- # modprobe nbd 00:06:05.338 Process app_repeat pid: 66514 00:06:05.338 07:52:11 -- event/event.sh@19 -- # repeat_pid=66514 00:06:05.338 07:52:11 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:05.338 07:52:11 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 66514' 00:06:05.338 07:52:11 -- event/event.sh@23 -- # for i in {0..2} 00:06:05.338 07:52:11 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:05.338 07:52:11 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:05.338 spdk_app_start Round 0 00:06:05.338 07:52:11 -- event/event.sh@25 -- # waitforlisten 66514 /var/tmp/spdk-nbd.sock 00:06:05.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:05.338 07:52:11 -- common/autotest_common.sh@819 -- # '[' -z 66514 ']' 00:06:05.338 07:52:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.338 07:52:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:05.338 07:52:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.338 07:52:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:05.338 07:52:11 -- common/autotest_common.sh@10 -- # set +x 00:06:05.338 [2024-07-13 07:52:11.023017] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:05.338 [2024-07-13 07:52:11.023101] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66514 ] 00:06:05.338 [2024-07-13 07:52:11.148854] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.595 [2024-07-13 07:52:11.184306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.595 [2024-07-13 07:52:11.184315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.595 07:52:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:05.595 07:52:11 -- common/autotest_common.sh@852 -- # return 0 00:06:05.595 07:52:11 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.854 Malloc0 00:06:05.854 07:52:11 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.113 Malloc1 00:06:06.113 07:52:11 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@12 -- # local i 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.113 07:52:11 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.372 /dev/nbd0 00:06:06.372 07:52:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.372 07:52:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.372 07:52:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:06.372 07:52:12 -- common/autotest_common.sh@857 -- # local i 00:06:06.372 07:52:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:06.372 07:52:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:06.372 07:52:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:06.372 07:52:12 -- common/autotest_common.sh@861 -- # break 00:06:06.372 07:52:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:06.372 07:52:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:06.372 07:52:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.372 1+0 records in 00:06:06.372 1+0 records out 00:06:06.372 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328325 s, 12.5 MB/s 00:06:06.372 07:52:12 -- 
common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.372 07:52:12 -- common/autotest_common.sh@874 -- # size=4096 00:06:06.372 07:52:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.372 07:52:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:06.372 07:52:12 -- common/autotest_common.sh@877 -- # return 0 00:06:06.372 07:52:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.372 07:52:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.372 07:52:12 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.631 /dev/nbd1 00:06:06.631 07:52:12 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.631 07:52:12 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.631 07:52:12 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:06.631 07:52:12 -- common/autotest_common.sh@857 -- # local i 00:06:06.631 07:52:12 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:06.631 07:52:12 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:06.631 07:52:12 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:06.631 07:52:12 -- common/autotest_common.sh@861 -- # break 00:06:06.631 07:52:12 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:06.631 07:52:12 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:06.631 07:52:12 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.631 1+0 records in 00:06:06.631 1+0 records out 00:06:06.631 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000453404 s, 9.0 MB/s 00:06:06.631 07:52:12 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.631 07:52:12 -- common/autotest_common.sh@874 -- # size=4096 00:06:06.631 07:52:12 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.631 07:52:12 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:06.631 07:52:12 -- common/autotest_common.sh@877 -- # return 0 00:06:06.631 07:52:12 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.631 07:52:12 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.631 07:52:12 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.631 07:52:12 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.631 07:52:12 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:06.888 { 00:06:06.888 "nbd_device": "/dev/nbd0", 00:06:06.888 "bdev_name": "Malloc0" 00:06:06.888 }, 00:06:06.888 { 00:06:06.888 "nbd_device": "/dev/nbd1", 00:06:06.888 "bdev_name": "Malloc1" 00:06:06.888 } 00:06:06.888 ]' 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:06.888 { 00:06:06.888 "nbd_device": "/dev/nbd0", 00:06:06.888 "bdev_name": "Malloc0" 00:06:06.888 }, 00:06:06.888 { 00:06:06.888 "nbd_device": "/dev/nbd1", 00:06:06.888 "bdev_name": "Malloc1" 00:06:06.888 } 00:06:06.888 ]' 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.888 /dev/nbd1' 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@65 -- # echo 
'/dev/nbd0 00:06:06.888 /dev/nbd1' 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.888 256+0 records in 00:06:06.888 256+0 records out 00:06:06.888 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00416589 s, 252 MB/s 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.888 07:52:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.147 256+0 records in 00:06:07.147 256+0 records out 00:06:07.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249654 s, 42.0 MB/s 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.147 256+0 records in 00:06:07.147 256+0 records out 00:06:07.147 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0267472 s, 39.2 MB/s 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@51 -- # local i 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.147 07:52:12 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.406 07:52:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.406 07:52:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.406 07:52:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.406 07:52:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.406 07:52:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.406 07:52:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.406 07:52:13 -- bdev/nbd_common.sh@41 -- # break 00:06:07.406 07:52:13 -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.406 07:52:13 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.406 07:52:13 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.406 07:52:13 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.663 07:52:13 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.663 07:52:13 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.663 07:52:13 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.663 07:52:13 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.663 07:52:13 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.663 07:52:13 -- bdev/nbd_common.sh@41 -- # break 00:06:07.663 07:52:13 -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.663 07:52:13 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.663 07:52:13 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.663 07:52:13 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.663 07:52:13 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:07.663 07:52:13 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.663 07:52:13 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:07.921 07:52:13 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:07.921 07:52:13 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:07.921 07:52:13 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.921 07:52:13 -- bdev/nbd_common.sh@65 -- # true 00:06:07.921 07:52:13 -- bdev/nbd_common.sh@65 -- # count=0 00:06:07.921 07:52:13 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:07.921 07:52:13 -- bdev/nbd_common.sh@104 -- # count=0 00:06:07.921 07:52:13 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:07.921 07:52:13 -- bdev/nbd_common.sh@109 -- # return 0 00:06:07.921 07:52:13 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.921 07:52:13 -- event/event.sh@35 -- # sleep 3 00:06:08.179 [2024-07-13 07:52:13.809348] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:08.179 [2024-07-13 07:52:13.839573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:08.179 [2024-07-13 07:52:13.839584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.179 [2024-07-13 07:52:13.867623] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:08.179 [2024-07-13 07:52:13.867674] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.467 spdk_app_start Round 1 00:06:11.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:11.467 07:52:16 -- event/event.sh@23 -- # for i in {0..2} 00:06:11.467 07:52:16 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:11.467 07:52:16 -- event/event.sh@25 -- # waitforlisten 66514 /var/tmp/spdk-nbd.sock 00:06:11.467 07:52:16 -- common/autotest_common.sh@819 -- # '[' -z 66514 ']' 00:06:11.467 07:52:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.467 07:52:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:11.467 07:52:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.467 07:52:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:11.467 07:52:16 -- common/autotest_common.sh@10 -- # set +x 00:06:11.467 07:52:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:11.467 07:52:16 -- common/autotest_common.sh@852 -- # return 0 00:06:11.467 07:52:16 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.467 Malloc0 00:06:11.467 07:52:17 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.726 Malloc1 00:06:11.726 07:52:17 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.726 07:52:17 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.726 07:52:17 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.726 07:52:17 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@12 -- # local i 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:11.727 /dev/nbd0 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:11.727 07:52:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:11.727 07:52:17 -- common/autotest_common.sh@857 -- # local i 00:06:11.727 07:52:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:11.727 07:52:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:11.727 07:52:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:11.727 07:52:17 -- common/autotest_common.sh@861 -- # break 00:06:11.727 07:52:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:11.727 07:52:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:11.727 07:52:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:11.727 1+0 records in 00:06:11.727 1+0 records out 00:06:11.727 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000386211 s, 10.6 MB/s 00:06:11.727 07:52:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.727 07:52:17 -- common/autotest_common.sh@874 -- # size=4096 00:06:11.727 07:52:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:11.727 07:52:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:11.727 07:52:17 -- common/autotest_common.sh@877 -- # return 0 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:11.727 07:52:17 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:11.985 /dev/nbd1 00:06:12.249 07:52:17 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.249 07:52:17 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.249 07:52:17 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:12.249 07:52:17 -- common/autotest_common.sh@857 -- # local i 00:06:12.249 07:52:17 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:12.249 07:52:17 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:12.249 07:52:17 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:12.249 07:52:17 -- common/autotest_common.sh@861 -- # break 00:06:12.249 07:52:17 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:12.249 07:52:17 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:12.249 07:52:17 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.249 1+0 records in 00:06:12.249 1+0 records out 00:06:12.249 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000590877 s, 6.9 MB/s 00:06:12.249 07:52:17 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.249 07:52:17 -- common/autotest_common.sh@874 -- # size=4096 00:06:12.249 07:52:17 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.249 07:52:17 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:12.249 07:52:17 -- common/autotest_common.sh@877 -- # return 0 00:06:12.249 07:52:17 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.249 07:52:17 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.249 07:52:17 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.249 07:52:17 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.249 07:52:17 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.560 { 00:06:12.560 "nbd_device": "/dev/nbd0", 00:06:12.560 "bdev_name": "Malloc0" 00:06:12.560 }, 00:06:12.560 { 00:06:12.560 "nbd_device": "/dev/nbd1", 00:06:12.560 "bdev_name": "Malloc1" 00:06:12.560 } 00:06:12.560 ]' 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.560 { 00:06:12.560 "nbd_device": "/dev/nbd0", 00:06:12.560 "bdev_name": "Malloc0" 00:06:12.560 }, 00:06:12.560 { 00:06:12.560 "nbd_device": "/dev/nbd1", 00:06:12.560 "bdev_name": "Malloc1" 00:06:12.560 } 00:06:12.560 ]' 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name='/dev/nbd0 00:06:12.560 /dev/nbd1' 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.560 /dev/nbd1' 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.560 256+0 records in 00:06:12.560 256+0 records out 00:06:12.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0099173 s, 106 MB/s 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.560 256+0 records in 00:06:12.560 256+0 records out 00:06:12.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0227845 s, 46.0 MB/s 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.560 256+0 records in 00:06:12.560 256+0 records out 00:06:12.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0243967 s, 43.0 MB/s 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@51 -- # local i 00:06:12.560 07:52:18 
-- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.560 07:52:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:12.818 07:52:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:12.818 07:52:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:12.818 07:52:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:12.818 07:52:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:12.818 07:52:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:12.818 07:52:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:12.818 07:52:18 -- bdev/nbd_common.sh@41 -- # break 00:06:12.818 07:52:18 -- bdev/nbd_common.sh@45 -- # return 0 00:06:12.818 07:52:18 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.818 07:52:18 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.076 07:52:18 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.076 07:52:18 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.076 07:52:18 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.076 07:52:18 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.076 07:52:18 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.076 07:52:18 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.076 07:52:18 -- bdev/nbd_common.sh@41 -- # break 00:06:13.076 07:52:18 -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.076 07:52:18 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.076 07:52:18 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.076 07:52:18 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.335 07:52:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.335 07:52:19 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.335 07:52:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.335 07:52:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.335 07:52:19 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.335 07:52:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.335 07:52:19 -- bdev/nbd_common.sh@65 -- # true 00:06:13.335 07:52:19 -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.335 07:52:19 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.335 07:52:19 -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.335 07:52:19 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.335 07:52:19 -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.335 07:52:19 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:13.594 07:52:19 -- event/event.sh@35 -- # sleep 3 00:06:13.852 [2024-07-13 07:52:19.463471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:13.852 [2024-07-13 07:52:19.497120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:13.852 [2024-07-13 07:52:19.497132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.852 [2024-07-13 07:52:19.526846] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:13.852 [2024-07-13 07:52:19.526961] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
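Each app_repeat round above repeats the same NBD round-trip check: two 64 MiB malloc bdevs are exported as /dev/nbd0 and /dev/nbd1, 1 MiB of random data is written through each device and compared back, and the devices are detached again. A rough standalone sketch of that flow follows, using only commands that appear in the trace; it assumes the app_repeat target is already listening on /var/tmp/spdk-nbd.sock and that the nbd kernel module is loaded (modprobe nbd, done earlier in this run).

# Rough sketch of one data-verification round; not a substitute for the
# nbd_rpc_data_verify / nbd_dd_data_verify helpers in bdev/nbd_common.sh.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
TMP=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest

# Create two malloc bdevs (64 MiB, 4 KiB blocks) and export them over NBD.
$RPC bdev_malloc_create 64 4096          # prints the bdev name, e.g. Malloc0
$RPC bdev_malloc_create 64 4096          # prints Malloc1
$RPC nbd_start_disk Malloc0 /dev/nbd0
$RPC nbd_start_disk Malloc1 /dev/nbd1

# Write 1 MiB of random data through each device, then read it back and compare.
dd if=/dev/urandom of="$TMP" bs=4096 count=256
for dev in /dev/nbd0 /dev/nbd1; do
    dd if="$TMP" of="$dev" bs=4096 count=256 oflag=direct
    cmp -b -n 1M "$TMP" "$dev"
done
rm "$TMP"

# Detach the NBD devices; the real helper also rechecks nbd_get_disks afterwards.
$RPC nbd_stop_disk /dev/nbd0
$RPC nbd_stop_disk /dev/nbd1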
00:06:17.147 spdk_app_start Round 2 00:06:17.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:17.147 07:52:22 -- event/event.sh@23 -- # for i in {0..2} 00:06:17.147 07:52:22 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:17.147 07:52:22 -- event/event.sh@25 -- # waitforlisten 66514 /var/tmp/spdk-nbd.sock 00:06:17.147 07:52:22 -- common/autotest_common.sh@819 -- # '[' -z 66514 ']' 00:06:17.147 07:52:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.147 07:52:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:17.147 07:52:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.147 07:52:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:17.147 07:52:22 -- common/autotest_common.sh@10 -- # set +x 00:06:17.147 07:52:22 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:17.147 07:52:22 -- common/autotest_common.sh@852 -- # return 0 00:06:17.147 07:52:22 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.147 Malloc0 00:06:17.147 07:52:22 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:17.406 Malloc1 00:06:17.406 07:52:23 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@12 -- # local i 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.406 07:52:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:17.664 /dev/nbd0 00:06:17.664 07:52:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:17.664 07:52:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:17.664 07:52:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd0 00:06:17.664 07:52:23 -- common/autotest_common.sh@857 -- # local i 00:06:17.664 07:52:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:17.664 07:52:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:17.664 07:52:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd0 /proc/partitions 00:06:17.664 07:52:23 -- common/autotest_common.sh@861 -- # break 00:06:17.664 07:52:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:17.664 07:52:23 -- common/autotest_common.sh@872 -- # (( i 
<= 20 )) 00:06:17.664 07:52:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.664 1+0 records in 00:06:17.664 1+0 records out 00:06:17.664 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000414041 s, 9.9 MB/s 00:06:17.664 07:52:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.664 07:52:23 -- common/autotest_common.sh@874 -- # size=4096 00:06:17.664 07:52:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.664 07:52:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:17.664 07:52:23 -- common/autotest_common.sh@877 -- # return 0 00:06:17.664 07:52:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.664 07:52:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.664 07:52:23 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:17.922 /dev/nbd1 00:06:17.922 07:52:23 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:17.922 07:52:23 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:17.922 07:52:23 -- common/autotest_common.sh@856 -- # local nbd_name=nbd1 00:06:17.922 07:52:23 -- common/autotest_common.sh@857 -- # local i 00:06:17.922 07:52:23 -- common/autotest_common.sh@859 -- # (( i = 1 )) 00:06:17.922 07:52:23 -- common/autotest_common.sh@859 -- # (( i <= 20 )) 00:06:17.922 07:52:23 -- common/autotest_common.sh@860 -- # grep -q -w nbd1 /proc/partitions 00:06:17.922 07:52:23 -- common/autotest_common.sh@861 -- # break 00:06:17.922 07:52:23 -- common/autotest_common.sh@872 -- # (( i = 1 )) 00:06:17.922 07:52:23 -- common/autotest_common.sh@872 -- # (( i <= 20 )) 00:06:17.922 07:52:23 -- common/autotest_common.sh@873 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:17.922 1+0 records in 00:06:17.922 1+0 records out 00:06:17.922 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000358443 s, 11.4 MB/s 00:06:17.922 07:52:23 -- common/autotest_common.sh@874 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.922 07:52:23 -- common/autotest_common.sh@874 -- # size=4096 00:06:17.922 07:52:23 -- common/autotest_common.sh@875 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:17.922 07:52:23 -- common/autotest_common.sh@876 -- # '[' 4096 '!=' 0 ']' 00:06:17.922 07:52:23 -- common/autotest_common.sh@877 -- # return 0 00:06:17.922 07:52:23 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:17.922 07:52:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:17.922 07:52:23 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.922 07:52:23 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.922 07:52:23 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:18.180 { 00:06:18.180 "nbd_device": "/dev/nbd0", 00:06:18.180 "bdev_name": "Malloc0" 00:06:18.180 }, 00:06:18.180 { 00:06:18.180 "nbd_device": "/dev/nbd1", 00:06:18.180 "bdev_name": "Malloc1" 00:06:18.180 } 00:06:18.180 ]' 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:18.180 { 00:06:18.180 "nbd_device": "/dev/nbd0", 00:06:18.180 "bdev_name": "Malloc0" 00:06:18.180 }, 00:06:18.180 { 00:06:18.180 "nbd_device": "/dev/nbd1", 00:06:18.180 "bdev_name": "Malloc1" 00:06:18.180 } 
00:06:18.180 ]' 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:18.180 /dev/nbd1' 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:18.180 /dev/nbd1' 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@65 -- # count=2 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@95 -- # count=2 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:18.180 256+0 records in 00:06:18.180 256+0 records out 00:06:18.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00706298 s, 148 MB/s 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:18.180 256+0 records in 00:06:18.180 256+0 records out 00:06:18.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220832 s, 47.5 MB/s 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:18.180 256+0 records in 00:06:18.180 256+0 records out 00:06:18.180 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0251949 s, 41.6 MB/s 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:06:18.180 07:52:23 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@51 -- # local i 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.180 07:52:23 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:18.747 07:52:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:18.747 07:52:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:18.747 07:52:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:18.747 07:52:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:18.747 07:52:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:18.747 07:52:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:18.747 07:52:24 -- bdev/nbd_common.sh@41 -- # break 00:06:18.747 07:52:24 -- bdev/nbd_common.sh@45 -- # return 0 00:06:18.747 07:52:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:18.747 07:52:24 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:19.006 07:52:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:19.006 07:52:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:19.006 07:52:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:19.006 07:52:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:19.006 07:52:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:19.006 07:52:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:19.006 07:52:24 -- bdev/nbd_common.sh@41 -- # break 00:06:19.006 07:52:24 -- bdev/nbd_common.sh@45 -- # return 0 00:06:19.006 07:52:24 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.006 07:52:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.006 07:52:24 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.266 07:52:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:19.266 07:52:24 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:19.266 07:52:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.266 07:52:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:19.266 07:52:24 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:19.266 07:52:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.266 07:52:24 -- bdev/nbd_common.sh@65 -- # true 00:06:19.266 07:52:24 -- bdev/nbd_common.sh@65 -- # count=0 00:06:19.266 07:52:24 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:19.266 07:52:24 -- bdev/nbd_common.sh@104 -- # count=0 00:06:19.266 07:52:24 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:19.266 07:52:24 -- bdev/nbd_common.sh@109 -- # return 0 00:06:19.266 07:52:24 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:19.525 07:52:25 -- event/event.sh@35 -- # sleep 3 00:06:19.525 [2024-07-13 07:52:25.285037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.525 [2024-07-13 07:52:25.316220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.525 [2024-07-13 07:52:25.316226] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.784 [2024-07-13 07:52:25.347412] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:06:19.784 [2024-07-13 07:52:25.347456] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:23.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:23.072 07:52:28 -- event/event.sh@38 -- # waitforlisten 66514 /var/tmp/spdk-nbd.sock 00:06:23.072 07:52:28 -- common/autotest_common.sh@819 -- # '[' -z 66514 ']' 00:06:23.072 07:52:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.072 07:52:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.072 07:52:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.072 07:52:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.072 07:52:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.072 07:52:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.072 07:52:28 -- common/autotest_common.sh@852 -- # return 0 00:06:23.072 07:52:28 -- event/event.sh@39 -- # killprocess 66514 00:06:23.072 07:52:28 -- common/autotest_common.sh@926 -- # '[' -z 66514 ']' 00:06:23.072 07:52:28 -- common/autotest_common.sh@930 -- # kill -0 66514 00:06:23.072 07:52:28 -- common/autotest_common.sh@931 -- # uname 00:06:23.072 07:52:28 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:23.072 07:52:28 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66514 00:06:23.072 killing process with pid 66514 00:06:23.072 07:52:28 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:23.072 07:52:28 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:23.072 07:52:28 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66514' 00:06:23.072 07:52:28 -- common/autotest_common.sh@945 -- # kill 66514 00:06:23.072 07:52:28 -- common/autotest_common.sh@950 -- # wait 66514 00:06:23.072 spdk_app_start is called in Round 0. 00:06:23.072 Shutdown signal received, stop current app iteration 00:06:23.072 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:06:23.072 spdk_app_start is called in Round 1. 00:06:23.072 Shutdown signal received, stop current app iteration 00:06:23.072 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:06:23.072 spdk_app_start is called in Round 2. 00:06:23.072 Shutdown signal received, stop current app iteration 00:06:23.072 Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 reinitialization... 00:06:23.072 spdk_app_start is called in Round 3. 
00:06:23.072 Shutdown signal received, stop current app iteration 00:06:23.072 ************************************ 00:06:23.072 END TEST app_repeat 00:06:23.072 ************************************ 00:06:23.072 07:52:28 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:23.072 07:52:28 -- event/event.sh@42 -- # return 0 00:06:23.072 00:06:23.072 real 0m17.582s 00:06:23.072 user 0m39.884s 00:06:23.072 sys 0m2.422s 00:06:23.072 07:52:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.072 07:52:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.072 07:52:28 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:23.072 07:52:28 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:23.072 07:52:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.072 07:52:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.072 07:52:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.072 ************************************ 00:06:23.072 START TEST cpu_locks 00:06:23.072 ************************************ 00:06:23.072 07:52:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:23.072 * Looking for test storage... 00:06:23.072 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:23.072 07:52:28 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:23.072 07:52:28 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:23.072 07:52:28 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:23.072 07:52:28 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:23.072 07:52:28 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:23.072 07:52:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:23.072 07:52:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.072 ************************************ 00:06:23.072 START TEST default_locks 00:06:23.072 ************************************ 00:06:23.072 07:52:28 -- common/autotest_common.sh@1104 -- # default_locks 00:06:23.072 07:52:28 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=66927 00:06:23.072 07:52:28 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.072 07:52:28 -- event/cpu_locks.sh@47 -- # waitforlisten 66927 00:06:23.072 07:52:28 -- common/autotest_common.sh@819 -- # '[' -z 66927 ']' 00:06:23.072 07:52:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.072 07:52:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:23.072 07:52:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.072 07:52:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:23.072 07:52:28 -- common/autotest_common.sh@10 -- # set +x 00:06:23.072 [2024-07-13 07:52:28.768661] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:23.072 [2024-07-13 07:52:28.768755] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66927 ] 00:06:23.331 [2024-07-13 07:52:28.899149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.331 [2024-07-13 07:52:28.934124] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:23.331 [2024-07-13 07:52:28.934524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.897 07:52:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:23.897 07:52:29 -- common/autotest_common.sh@852 -- # return 0 00:06:23.897 07:52:29 -- event/cpu_locks.sh@49 -- # locks_exist 66927 00:06:23.897 07:52:29 -- event/cpu_locks.sh@22 -- # lslocks -p 66927 00:06:23.897 07:52:29 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:24.464 07:52:30 -- event/cpu_locks.sh@50 -- # killprocess 66927 00:06:24.464 07:52:30 -- common/autotest_common.sh@926 -- # '[' -z 66927 ']' 00:06:24.464 07:52:30 -- common/autotest_common.sh@930 -- # kill -0 66927 00:06:24.464 07:52:30 -- common/autotest_common.sh@931 -- # uname 00:06:24.464 07:52:30 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:24.464 07:52:30 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66927 00:06:24.464 killing process with pid 66927 00:06:24.464 07:52:30 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:24.464 07:52:30 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:24.464 07:52:30 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66927' 00:06:24.464 07:52:30 -- common/autotest_common.sh@945 -- # kill 66927 00:06:24.464 07:52:30 -- common/autotest_common.sh@950 -- # wait 66927 00:06:24.723 07:52:30 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 66927 00:06:24.723 07:52:30 -- common/autotest_common.sh@640 -- # local es=0 00:06:24.723 07:52:30 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 66927 00:06:24.723 07:52:30 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:24.723 07:52:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.723 07:52:30 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:24.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.723 07:52:30 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:24.723 07:52:30 -- common/autotest_common.sh@643 -- # waitforlisten 66927 00:06:24.723 07:52:30 -- common/autotest_common.sh@819 -- # '[' -z 66927 ']' 00:06:24.723 07:52:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.723 07:52:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.723 07:52:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:24.723 07:52:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.723 ERROR: process (pid: 66927) is no longer running 00:06:24.723 07:52:30 -- common/autotest_common.sh@10 -- # set +x 00:06:24.723 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (66927) - No such process 00:06:24.723 07:52:30 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:24.723 07:52:30 -- common/autotest_common.sh@852 -- # return 1 00:06:24.723 07:52:30 -- common/autotest_common.sh@643 -- # es=1 00:06:24.723 07:52:30 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:24.723 07:52:30 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:24.723 07:52:30 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:24.723 07:52:30 -- event/cpu_locks.sh@54 -- # no_locks 00:06:24.723 07:52:30 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:24.723 07:52:30 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:24.723 07:52:30 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:24.723 00:06:24.723 real 0m1.667s 00:06:24.723 user 0m1.885s 00:06:24.723 sys 0m0.436s 00:06:24.723 07:52:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.723 07:52:30 -- common/autotest_common.sh@10 -- # set +x 00:06:24.723 ************************************ 00:06:24.723 END TEST default_locks 00:06:24.723 ************************************ 00:06:24.723 07:52:30 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:24.723 07:52:30 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:24.723 07:52:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:24.723 07:52:30 -- common/autotest_common.sh@10 -- # set +x 00:06:24.723 ************************************ 00:06:24.723 START TEST default_locks_via_rpc 00:06:24.723 ************************************ 00:06:24.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:24.723 07:52:30 -- common/autotest_common.sh@1104 -- # default_locks_via_rpc 00:06:24.723 07:52:30 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=66974 00:06:24.723 07:52:30 -- event/cpu_locks.sh@63 -- # waitforlisten 66974 00:06:24.723 07:52:30 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:24.723 07:52:30 -- common/autotest_common.sh@819 -- # '[' -z 66974 ']' 00:06:24.723 07:52:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:24.723 07:52:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:24.723 07:52:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:24.723 07:52:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:24.723 07:52:30 -- common/autotest_common.sh@10 -- # set +x 00:06:24.723 [2024-07-13 07:52:30.492713] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:24.723 [2024-07-13 07:52:30.492810] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66974 ] 00:06:24.982 [2024-07-13 07:52:30.622644] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.982 [2024-07-13 07:52:30.656791] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:24.982 [2024-07-13 07:52:30.657018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.917 07:52:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:25.917 07:52:31 -- common/autotest_common.sh@852 -- # return 0 00:06:25.917 07:52:31 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:25.917 07:52:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:25.917 07:52:31 -- common/autotest_common.sh@10 -- # set +x 00:06:25.917 07:52:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:25.917 07:52:31 -- event/cpu_locks.sh@67 -- # no_locks 00:06:25.917 07:52:31 -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:25.917 07:52:31 -- event/cpu_locks.sh@26 -- # local lock_files 00:06:25.917 07:52:31 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:25.917 07:52:31 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:25.917 07:52:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:25.917 07:52:31 -- common/autotest_common.sh@10 -- # set +x 00:06:25.917 07:52:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:25.917 07:52:31 -- event/cpu_locks.sh@71 -- # locks_exist 66974 00:06:25.917 07:52:31 -- event/cpu_locks.sh@22 -- # lslocks -p 66974 00:06:25.917 07:52:31 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:26.176 07:52:31 -- event/cpu_locks.sh@73 -- # killprocess 66974 00:06:26.176 07:52:31 -- common/autotest_common.sh@926 -- # '[' -z 66974 ']' 00:06:26.176 07:52:31 -- common/autotest_common.sh@930 -- # kill -0 66974 00:06:26.176 07:52:31 -- common/autotest_common.sh@931 -- # uname 00:06:26.176 07:52:31 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:26.176 07:52:31 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 66974 00:06:26.176 killing process with pid 66974 00:06:26.176 07:52:31 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:26.176 07:52:31 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:26.176 07:52:31 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 66974' 00:06:26.176 07:52:31 -- common/autotest_common.sh@945 -- # kill 66974 00:06:26.176 07:52:31 -- common/autotest_common.sh@950 -- # wait 66974 00:06:26.434 00:06:26.434 real 0m1.645s 00:06:26.434 user 0m1.872s 00:06:26.434 sys 0m0.411s 00:06:26.434 07:52:32 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.434 ************************************ 00:06:26.434 END TEST default_locks_via_rpc 00:06:26.434 ************************************ 00:06:26.434 07:52:32 -- common/autotest_common.sh@10 -- # set +x 00:06:26.434 07:52:32 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:26.434 07:52:32 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:26.434 07:52:32 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:26.434 07:52:32 -- common/autotest_common.sh@10 -- # set +x 00:06:26.434 
************************************ 00:06:26.434 START TEST non_locking_app_on_locked_coremask 00:06:26.434 ************************************ 00:06:26.434 07:52:32 -- common/autotest_common.sh@1104 -- # non_locking_app_on_locked_coremask 00:06:26.434 07:52:32 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=67025 00:06:26.434 07:52:32 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:26.434 07:52:32 -- event/cpu_locks.sh@81 -- # waitforlisten 67025 /var/tmp/spdk.sock 00:06:26.434 07:52:32 -- common/autotest_common.sh@819 -- # '[' -z 67025 ']' 00:06:26.434 07:52:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.434 07:52:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:26.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.434 07:52:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.434 07:52:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:26.434 07:52:32 -- common/autotest_common.sh@10 -- # set +x 00:06:26.434 [2024-07-13 07:52:32.202718] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:26.434 [2024-07-13 07:52:32.202881] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67025 ] 00:06:26.693 [2024-07-13 07:52:32.338113] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.693 [2024-07-13 07:52:32.372227] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:26.693 [2024-07-13 07:52:32.372412] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.299 07:52:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:27.299 07:52:33 -- common/autotest_common.sh@852 -- # return 0 00:06:27.299 07:52:33 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:27.299 07:52:33 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=67041 00:06:27.299 07:52:33 -- event/cpu_locks.sh@85 -- # waitforlisten 67041 /var/tmp/spdk2.sock 00:06:27.299 07:52:33 -- common/autotest_common.sh@819 -- # '[' -z 67041 ']' 00:06:27.299 07:52:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:27.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:27.299 07:52:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:27.299 07:52:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:27.299 07:52:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:27.299 07:52:33 -- common/autotest_common.sh@10 -- # set +x 00:06:27.556 [2024-07-13 07:52:33.141460] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:27.556 [2024-07-13 07:52:33.141557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67041 ] 00:06:27.556 [2024-07-13 07:52:33.276987] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:27.556 [2024-07-13 07:52:33.277024] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.556 [2024-07-13 07:52:33.341375] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:27.556 [2024-07-13 07:52:33.341543] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.493 07:52:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:28.493 07:52:34 -- common/autotest_common.sh@852 -- # return 0 00:06:28.493 07:52:34 -- event/cpu_locks.sh@87 -- # locks_exist 67025 00:06:28.493 07:52:34 -- event/cpu_locks.sh@22 -- # lslocks -p 67025 00:06:28.493 07:52:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:29.061 07:52:34 -- event/cpu_locks.sh@89 -- # killprocess 67025 00:06:29.320 07:52:34 -- common/autotest_common.sh@926 -- # '[' -z 67025 ']' 00:06:29.320 07:52:34 -- common/autotest_common.sh@930 -- # kill -0 67025 00:06:29.320 07:52:34 -- common/autotest_common.sh@931 -- # uname 00:06:29.320 07:52:34 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:29.320 07:52:34 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67025 00:06:29.320 07:52:34 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:29.320 killing process with pid 67025 00:06:29.320 07:52:34 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:29.320 07:52:34 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67025' 00:06:29.320 07:52:34 -- common/autotest_common.sh@945 -- # kill 67025 00:06:29.320 07:52:34 -- common/autotest_common.sh@950 -- # wait 67025 00:06:29.580 07:52:35 -- event/cpu_locks.sh@90 -- # killprocess 67041 00:06:29.580 07:52:35 -- common/autotest_common.sh@926 -- # '[' -z 67041 ']' 00:06:29.580 07:52:35 -- common/autotest_common.sh@930 -- # kill -0 67041 00:06:29.580 07:52:35 -- common/autotest_common.sh@931 -- # uname 00:06:29.580 07:52:35 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:29.580 07:52:35 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67041 00:06:29.580 07:52:35 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:29.580 killing process with pid 67041 00:06:29.580 07:52:35 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:29.580 07:52:35 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67041' 00:06:29.580 07:52:35 -- common/autotest_common.sh@945 -- # kill 67041 00:06:29.580 07:52:35 -- common/autotest_common.sh@950 -- # wait 67041 00:06:29.839 00:06:29.839 real 0m3.470s 00:06:29.839 user 0m4.029s 00:06:29.839 sys 0m0.856s 00:06:29.839 07:52:35 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.839 07:52:35 -- common/autotest_common.sh@10 -- # set +x 00:06:29.839 ************************************ 00:06:29.839 END TEST non_locking_app_on_locked_coremask 00:06:29.839 ************************************ 00:06:29.839 07:52:35 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:29.839 07:52:35 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:30.098 07:52:35 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:30.098 07:52:35 -- common/autotest_common.sh@10 -- # set +x 00:06:30.098 ************************************ 00:06:30.098 START TEST locking_app_on_unlocked_coremask 00:06:30.098 ************************************ 00:06:30.098 07:52:35 -- common/autotest_common.sh@1104 -- # locking_app_on_unlocked_coremask 00:06:30.098 07:52:35 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=67097 00:06:30.098 07:52:35 -- event/cpu_locks.sh@99 -- # waitforlisten 67097 /var/tmp/spdk.sock 00:06:30.098 07:52:35 -- common/autotest_common.sh@819 -- # '[' -z 67097 ']' 00:06:30.098 07:52:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.098 07:52:35 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:30.098 07:52:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:30.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:30.098 07:52:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.098 07:52:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:30.098 07:52:35 -- common/autotest_common.sh@10 -- # set +x 00:06:30.098 [2024-07-13 07:52:35.719098] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:30.098 [2024-07-13 07:52:35.719216] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67097 ] 00:06:30.098 [2024-07-13 07:52:35.850121] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:30.098 [2024-07-13 07:52:35.850188] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.098 [2024-07-13 07:52:35.884076] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:30.098 [2024-07-13 07:52:35.884219] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.034 07:52:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.034 07:52:36 -- common/autotest_common.sh@852 -- # return 0 00:06:31.034 07:52:36 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:31.034 07:52:36 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=67113 00:06:31.034 07:52:36 -- event/cpu_locks.sh@103 -- # waitforlisten 67113 /var/tmp/spdk2.sock 00:06:31.034 07:52:36 -- common/autotest_common.sh@819 -- # '[' -z 67113 ']' 00:06:31.034 07:52:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.034 07:52:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:31.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.034 07:52:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.034 07:52:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:31.034 07:52:36 -- common/autotest_common.sh@10 -- # set +x 00:06:31.034 [2024-07-13 07:52:36.703504] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:31.034 [2024-07-13 07:52:36.703581] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67113 ] 00:06:31.034 [2024-07-13 07:52:36.837951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.293 [2024-07-13 07:52:36.902308] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:31.293 [2024-07-13 07:52:36.902449] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.859 07:52:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:31.859 07:52:37 -- common/autotest_common.sh@852 -- # return 0 00:06:31.859 07:52:37 -- event/cpu_locks.sh@105 -- # locks_exist 67113 00:06:31.859 07:52:37 -- event/cpu_locks.sh@22 -- # lslocks -p 67113 00:06:31.859 07:52:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:32.426 07:52:38 -- event/cpu_locks.sh@107 -- # killprocess 67097 00:06:32.426 07:52:38 -- common/autotest_common.sh@926 -- # '[' -z 67097 ']' 00:06:32.426 07:52:38 -- common/autotest_common.sh@930 -- # kill -0 67097 00:06:32.426 07:52:38 -- common/autotest_common.sh@931 -- # uname 00:06:32.426 07:52:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.426 07:52:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67097 00:06:32.426 07:52:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:32.426 killing process with pid 67097 00:06:32.426 07:52:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:32.426 07:52:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67097' 00:06:32.426 07:52:38 -- common/autotest_common.sh@945 -- # kill 67097 00:06:32.426 07:52:38 -- common/autotest_common.sh@950 -- # wait 67097 00:06:32.994 07:52:38 -- event/cpu_locks.sh@108 -- # killprocess 67113 00:06:32.994 07:52:38 -- common/autotest_common.sh@926 -- # '[' -z 67113 ']' 00:06:32.994 07:52:38 -- common/autotest_common.sh@930 -- # kill -0 67113 00:06:32.994 07:52:38 -- common/autotest_common.sh@931 -- # uname 00:06:32.994 07:52:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:32.994 07:52:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67113 00:06:32.994 killing process with pid 67113 00:06:32.994 07:52:38 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:32.994 07:52:38 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:32.994 07:52:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67113' 00:06:32.994 07:52:38 -- common/autotest_common.sh@945 -- # kill 67113 00:06:32.994 07:52:38 -- common/autotest_common.sh@950 -- # wait 67113 00:06:33.253 00:06:33.253 real 0m3.163s 00:06:33.253 user 0m3.712s 00:06:33.253 sys 0m0.716s 00:06:33.253 07:52:38 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.253 ************************************ 00:06:33.253 END TEST locking_app_on_unlocked_coremask 00:06:33.253 ************************************ 00:06:33.253 07:52:38 -- common/autotest_common.sh@10 -- # set +x 00:06:33.253 07:52:38 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:33.253 07:52:38 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:33.253 07:52:38 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:33.253 07:52:38 -- common/autotest_common.sh@10 -- # set +x 
00:06:33.253 ************************************ 00:06:33.253 START TEST locking_app_on_locked_coremask 00:06:33.253 ************************************ 00:06:33.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.253 07:52:38 -- common/autotest_common.sh@1104 -- # locking_app_on_locked_coremask 00:06:33.253 07:52:38 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=67173 00:06:33.253 07:52:38 -- event/cpu_locks.sh@116 -- # waitforlisten 67173 /var/tmp/spdk.sock 00:06:33.253 07:52:38 -- common/autotest_common.sh@819 -- # '[' -z 67173 ']' 00:06:33.253 07:52:38 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.253 07:52:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.253 07:52:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:33.253 07:52:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.253 07:52:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:33.253 07:52:38 -- common/autotest_common.sh@10 -- # set +x 00:06:33.253 [2024-07-13 07:52:38.938555] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:33.253 [2024-07-13 07:52:38.938643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67173 ] 00:06:33.512 [2024-07-13 07:52:39.070034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.512 [2024-07-13 07:52:39.103179] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:33.512 [2024-07-13 07:52:39.103337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:34.080 07:52:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.080 07:52:39 -- common/autotest_common.sh@852 -- # return 0 00:06:34.080 07:52:39 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:34.080 07:52:39 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=67185 00:06:34.080 07:52:39 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 67185 /var/tmp/spdk2.sock 00:06:34.080 07:52:39 -- common/autotest_common.sh@640 -- # local es=0 00:06:34.080 07:52:39 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 67185 /var/tmp/spdk2.sock 00:06:34.080 07:52:39 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:34.080 07:52:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:34.080 07:52:39 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:34.080 07:52:39 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:34.080 07:52:39 -- common/autotest_common.sh@643 -- # waitforlisten 67185 /var/tmp/spdk2.sock 00:06:34.080 07:52:39 -- common/autotest_common.sh@819 -- # '[' -z 67185 ']' 00:06:34.081 07:52:39 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.081 07:52:39 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:34.081 07:52:39 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:34.081 07:52:39 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:34.081 07:52:39 -- common/autotest_common.sh@10 -- # set +x 00:06:34.081 [2024-07-13 07:52:39.889581] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:34.081 [2024-07-13 07:52:39.889900] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67185 ] 00:06:34.340 [2024-07-13 07:52:40.030924] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 67173 has claimed it. 00:06:34.340 [2024-07-13 07:52:40.031052] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:34.928 ERROR: process (pid: 67185) is no longer running 00:06:34.928 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (67185) - No such process 00:06:34.928 07:52:40 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:34.928 07:52:40 -- common/autotest_common.sh@852 -- # return 1 00:06:34.928 07:52:40 -- common/autotest_common.sh@643 -- # es=1 00:06:34.928 07:52:40 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:34.928 07:52:40 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:34.928 07:52:40 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:34.928 07:52:40 -- event/cpu_locks.sh@122 -- # locks_exist 67173 00:06:34.928 07:52:40 -- event/cpu_locks.sh@22 -- # lslocks -p 67173 00:06:34.928 07:52:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:35.189 07:52:40 -- event/cpu_locks.sh@124 -- # killprocess 67173 00:06:35.189 07:52:40 -- common/autotest_common.sh@926 -- # '[' -z 67173 ']' 00:06:35.189 07:52:40 -- common/autotest_common.sh@930 -- # kill -0 67173 00:06:35.189 07:52:40 -- common/autotest_common.sh@931 -- # uname 00:06:35.189 07:52:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:35.189 07:52:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67173 00:06:35.189 killing process with pid 67173 00:06:35.189 07:52:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:35.189 07:52:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:35.189 07:52:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67173' 00:06:35.189 07:52:40 -- common/autotest_common.sh@945 -- # kill 67173 00:06:35.189 07:52:40 -- common/autotest_common.sh@950 -- # wait 67173 00:06:35.447 ************************************ 00:06:35.447 END TEST locking_app_on_locked_coremask 00:06:35.447 ************************************ 00:06:35.447 00:06:35.447 real 0m2.307s 00:06:35.447 user 0m2.740s 00:06:35.447 sys 0m0.455s 00:06:35.447 07:52:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.447 07:52:41 -- common/autotest_common.sh@10 -- # set +x 00:06:35.447 07:52:41 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:35.447 07:52:41 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:35.447 07:52:41 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:35.447 07:52:41 -- common/autotest_common.sh@10 -- # set +x 00:06:35.447 ************************************ 00:06:35.447 START TEST locking_overlapped_coremask 00:06:35.447 ************************************ 00:06:35.447 07:52:41 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask 00:06:35.447 07:52:41 
-- event/cpu_locks.sh@132 -- # spdk_tgt_pid=67236 00:06:35.447 07:52:41 -- event/cpu_locks.sh@133 -- # waitforlisten 67236 /var/tmp/spdk.sock 00:06:35.447 07:52:41 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:06:35.447 07:52:41 -- common/autotest_common.sh@819 -- # '[' -z 67236 ']' 00:06:35.447 07:52:41 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.447 07:52:41 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:35.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.447 07:52:41 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.447 07:52:41 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:35.447 07:52:41 -- common/autotest_common.sh@10 -- # set +x 00:06:35.707 [2024-07-13 07:52:41.299670] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:35.707 [2024-07-13 07:52:41.299796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67236 ] 00:06:35.707 [2024-07-13 07:52:41.436323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.707 [2024-07-13 07:52:41.473275] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:35.707 [2024-07-13 07:52:41.474001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.707 [2024-07-13 07:52:41.474095] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.707 [2024-07-13 07:52:41.474099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.641 07:52:42 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:36.641 07:52:42 -- common/autotest_common.sh@852 -- # return 0 00:06:36.641 07:52:42 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=67254 00:06:36.642 07:52:42 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:36.642 07:52:42 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 67254 /var/tmp/spdk2.sock 00:06:36.642 07:52:42 -- common/autotest_common.sh@640 -- # local es=0 00:06:36.642 07:52:42 -- common/autotest_common.sh@642 -- # valid_exec_arg waitforlisten 67254 /var/tmp/spdk2.sock 00:06:36.642 07:52:42 -- common/autotest_common.sh@628 -- # local arg=waitforlisten 00:06:36.642 07:52:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.642 07:52:42 -- common/autotest_common.sh@632 -- # type -t waitforlisten 00:06:36.642 07:52:42 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:36.642 07:52:42 -- common/autotest_common.sh@643 -- # waitforlisten 67254 /var/tmp/spdk2.sock 00:06:36.642 07:52:42 -- common/autotest_common.sh@819 -- # '[' -z 67254 ']' 00:06:36.642 07:52:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:36.642 07:52:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:36.642 07:52:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:36.642 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:36.642 07:52:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:36.642 07:52:42 -- common/autotest_common.sh@10 -- # set +x 00:06:36.642 [2024-07-13 07:52:42.269781] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:36.642 [2024-07-13 07:52:42.269886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67254 ] 00:06:36.642 [2024-07-13 07:52:42.410425] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67236 has claimed it. 00:06:36.642 [2024-07-13 07:52:42.410498] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:37.208 ERROR: process (pid: 67254) is no longer running 00:06:37.208 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: kill: (67254) - No such process 00:06:37.208 07:52:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:37.208 07:52:43 -- common/autotest_common.sh@852 -- # return 1 00:06:37.208 07:52:43 -- common/autotest_common.sh@643 -- # es=1 00:06:37.208 07:52:43 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:37.208 07:52:43 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:37.208 07:52:43 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:37.208 07:52:43 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:37.208 07:52:43 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:37.208 07:52:43 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:37.208 07:52:43 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:37.208 07:52:43 -- event/cpu_locks.sh@141 -- # killprocess 67236 00:06:37.208 07:52:43 -- common/autotest_common.sh@926 -- # '[' -z 67236 ']' 00:06:37.208 07:52:43 -- common/autotest_common.sh@930 -- # kill -0 67236 00:06:37.208 07:52:43 -- common/autotest_common.sh@931 -- # uname 00:06:37.208 07:52:43 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:37.208 07:52:43 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67236 00:06:37.466 killing process with pid 67236 00:06:37.467 07:52:43 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:37.467 07:52:43 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:37.467 07:52:43 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67236' 00:06:37.467 07:52:43 -- common/autotest_common.sh@945 -- # kill 67236 00:06:37.467 07:52:43 -- common/autotest_common.sh@950 -- # wait 67236 00:06:37.467 00:06:37.467 real 0m2.035s 00:06:37.467 user 0m5.858s 00:06:37.467 sys 0m0.319s 00:06:37.467 07:52:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.467 ************************************ 00:06:37.467 END TEST locking_overlapped_coremask 00:06:37.467 ************************************ 00:06:37.467 07:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:37.725 07:52:43 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:37.725 07:52:43 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:37.725 07:52:43 -- 
common/autotest_common.sh@1083 -- # xtrace_disable 00:06:37.725 07:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:37.725 ************************************ 00:06:37.725 START TEST locking_overlapped_coremask_via_rpc 00:06:37.725 ************************************ 00:06:37.725 07:52:43 -- common/autotest_common.sh@1104 -- # locking_overlapped_coremask_via_rpc 00:06:37.725 07:52:43 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=67294 00:06:37.725 07:52:43 -- event/cpu_locks.sh@149 -- # waitforlisten 67294 /var/tmp/spdk.sock 00:06:37.725 07:52:43 -- common/autotest_common.sh@819 -- # '[' -z 67294 ']' 00:06:37.725 07:52:43 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:37.725 07:52:43 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.725 07:52:43 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:37.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.725 07:52:43 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.725 07:52:43 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:37.725 07:52:43 -- common/autotest_common.sh@10 -- # set +x 00:06:37.725 [2024-07-13 07:52:43.381366] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:37.725 [2024-07-13 07:52:43.381489] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67294 ] 00:06:37.725 [2024-07-13 07:52:43.514018] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:37.725 [2024-07-13 07:52:43.514121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:37.982 [2024-07-13 07:52:43.550682] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.982 [2024-07-13 07:52:43.551032] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:37.982 [2024-07-13 07:52:43.552076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:37.982 [2024-07-13 07:52:43.552090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.547 07:52:44 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:38.547 07:52:44 -- common/autotest_common.sh@852 -- # return 0 00:06:38.547 07:52:44 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=67312 00:06:38.547 07:52:44 -- event/cpu_locks.sh@153 -- # waitforlisten 67312 /var/tmp/spdk2.sock 00:06:38.548 07:52:44 -- common/autotest_common.sh@819 -- # '[' -z 67312 ']' 00:06:38.548 07:52:44 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:38.548 07:52:44 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:38.548 07:52:44 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:38.548 07:52:44 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:38.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:38.548 07:52:44 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:38.548 07:52:44 -- common/autotest_common.sh@10 -- # set +x 00:06:38.548 [2024-07-13 07:52:44.355467] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:38.548 [2024-07-13 07:52:44.355557] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67312 ] 00:06:38.806 [2024-07-13 07:52:44.499197] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:38.806 [2024-07-13 07:52:44.499243] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.806 [2024-07-13 07:52:44.563806] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.806 [2024-07-13 07:52:44.564091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:38.806 [2024-07-13 07:52:44.564238] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:38.806 [2024-07-13 07:52:44.564240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.740 07:52:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.740 07:52:45 -- common/autotest_common.sh@852 -- # return 0 00:06:39.740 07:52:45 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.740 07:52:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.740 07:52:45 -- common/autotest_common.sh@10 -- # set +x 00:06:39.740 07:52:45 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:39.740 07:52:45 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.740 07:52:45 -- common/autotest_common.sh@640 -- # local es=0 00:06:39.740 07:52:45 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.740 07:52:45 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:06:39.740 07:52:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:39.740 07:52:45 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:06:39.740 07:52:45 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:39.740 07:52:45 -- common/autotest_common.sh@643 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:39.741 07:52:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:39.741 07:52:45 -- common/autotest_common.sh@10 -- # set +x 00:06:39.741 [2024-07-13 07:52:45.242950] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 67294 has claimed it. 00:06:39.741 request: 00:06:39.741 { 00:06:39.741 "method": "framework_enable_cpumask_locks", 00:06:39.741 "req_id": 1 00:06:39.741 } 00:06:39.741 Got JSON-RPC error response 00:06:39.741 response: 00:06:39.741 { 00:06:39.741 "code": -32603, 00:06:39.741 "message": "Failed to claim CPU core: 2" 00:06:39.741 } 00:06:39.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:39.741 07:52:45 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:06:39.741 07:52:45 -- common/autotest_common.sh@643 -- # es=1 00:06:39.741 07:52:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:39.741 07:52:45 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:39.741 07:52:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:39.741 07:52:45 -- event/cpu_locks.sh@158 -- # waitforlisten 67294 /var/tmp/spdk.sock 00:06:39.741 07:52:45 -- common/autotest_common.sh@819 -- # '[' -z 67294 ']' 00:06:39.741 07:52:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.741 07:52:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.741 07:52:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.741 07:52:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.741 07:52:45 -- common/autotest_common.sh@10 -- # set +x 00:06:39.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.741 07:52:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:39.741 07:52:45 -- common/autotest_common.sh@852 -- # return 0 00:06:39.741 07:52:45 -- event/cpu_locks.sh@159 -- # waitforlisten 67312 /var/tmp/spdk2.sock 00:06:39.741 07:52:45 -- common/autotest_common.sh@819 -- # '[' -z 67312 ']' 00:06:39.741 07:52:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.741 07:52:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:39.741 07:52:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.741 07:52:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:39.741 07:52:45 -- common/autotest_common.sh@10 -- # set +x 00:06:40.000 ************************************ 00:06:40.000 END TEST locking_overlapped_coremask_via_rpc 00:06:40.000 ************************************ 00:06:40.000 07:52:45 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:40.000 07:52:45 -- common/autotest_common.sh@852 -- # return 0 00:06:40.000 07:52:45 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:40.000 07:52:45 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:40.000 07:52:45 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:40.000 07:52:45 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:40.000 00:06:40.000 real 0m2.404s 00:06:40.000 user 0m1.177s 00:06:40.000 sys 0m0.163s 00:06:40.000 07:52:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.000 07:52:45 -- common/autotest_common.sh@10 -- # set +x 00:06:40.000 07:52:45 -- event/cpu_locks.sh@174 -- # cleanup 00:06:40.000 07:52:45 -- event/cpu_locks.sh@15 -- # [[ -z 67294 ]] 00:06:40.000 07:52:45 -- event/cpu_locks.sh@15 -- # killprocess 67294 00:06:40.000 07:52:45 -- common/autotest_common.sh@926 -- # '[' -z 67294 ']' 00:06:40.000 07:52:45 -- common/autotest_common.sh@930 -- # kill -0 67294 00:06:40.000 07:52:45 -- common/autotest_common.sh@931 -- # uname 00:06:40.000 07:52:45 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:40.000 07:52:45 -- common/autotest_common.sh@932 -- # ps 
--no-headers -o comm= 67294 00:06:40.000 killing process with pid 67294 00:06:40.000 07:52:45 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:40.000 07:52:45 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:40.000 07:52:45 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67294' 00:06:40.000 07:52:45 -- common/autotest_common.sh@945 -- # kill 67294 00:06:40.000 07:52:45 -- common/autotest_common.sh@950 -- # wait 67294 00:06:40.259 07:52:46 -- event/cpu_locks.sh@16 -- # [[ -z 67312 ]] 00:06:40.259 07:52:46 -- event/cpu_locks.sh@16 -- # killprocess 67312 00:06:40.259 07:52:46 -- common/autotest_common.sh@926 -- # '[' -z 67312 ']' 00:06:40.259 07:52:46 -- common/autotest_common.sh@930 -- # kill -0 67312 00:06:40.259 07:52:46 -- common/autotest_common.sh@931 -- # uname 00:06:40.259 07:52:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:40.259 07:52:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67312 00:06:40.259 killing process with pid 67312 00:06:40.259 07:52:46 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:06:40.259 07:52:46 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:06:40.259 07:52:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67312' 00:06:40.259 07:52:46 -- common/autotest_common.sh@945 -- # kill 67312 00:06:40.259 07:52:46 -- common/autotest_common.sh@950 -- # wait 67312 00:06:40.518 07:52:46 -- event/cpu_locks.sh@18 -- # rm -f 00:06:40.518 07:52:46 -- event/cpu_locks.sh@1 -- # cleanup 00:06:40.518 07:52:46 -- event/cpu_locks.sh@15 -- # [[ -z 67294 ]] 00:06:40.518 07:52:46 -- event/cpu_locks.sh@15 -- # killprocess 67294 00:06:40.518 07:52:46 -- common/autotest_common.sh@926 -- # '[' -z 67294 ']' 00:06:40.518 Process with pid 67294 is not found 00:06:40.518 Process with pid 67312 is not found 00:06:40.518 07:52:46 -- common/autotest_common.sh@930 -- # kill -0 67294 00:06:40.518 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (67294) - No such process 00:06:40.518 07:52:46 -- common/autotest_common.sh@953 -- # echo 'Process with pid 67294 is not found' 00:06:40.518 07:52:46 -- event/cpu_locks.sh@16 -- # [[ -z 67312 ]] 00:06:40.518 07:52:46 -- event/cpu_locks.sh@16 -- # killprocess 67312 00:06:40.518 07:52:46 -- common/autotest_common.sh@926 -- # '[' -z 67312 ']' 00:06:40.518 07:52:46 -- common/autotest_common.sh@930 -- # kill -0 67312 00:06:40.518 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (67312) - No such process 00:06:40.518 07:52:46 -- common/autotest_common.sh@953 -- # echo 'Process with pid 67312 is not found' 00:06:40.518 07:52:46 -- event/cpu_locks.sh@18 -- # rm -f 00:06:40.518 00:06:40.518 real 0m17.641s 00:06:40.518 user 0m32.069s 00:06:40.518 sys 0m3.997s 00:06:40.518 07:52:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.518 07:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:40.518 ************************************ 00:06:40.518 END TEST cpu_locks 00:06:40.518 ************************************ 00:06:40.518 00:06:40.518 real 0m43.002s 00:06:40.518 user 1m23.800s 00:06:40.518 sys 0m7.068s 00:06:40.518 07:52:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.518 07:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:40.518 ************************************ 00:06:40.518 END TEST event 00:06:40.518 ************************************ 00:06:40.777 07:52:46 -- spdk/autotest.sh@188 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:40.777 07:52:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:40.777 07:52:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.777 07:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:40.777 ************************************ 00:06:40.777 START TEST thread 00:06:40.777 ************************************ 00:06:40.777 07:52:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:06:40.777 * Looking for test storage... 00:06:40.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:06:40.777 07:52:46 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:40.777 07:52:46 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:40.777 07:52:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:40.777 07:52:46 -- common/autotest_common.sh@10 -- # set +x 00:06:40.777 ************************************ 00:06:40.777 START TEST thread_poller_perf 00:06:40.777 ************************************ 00:06:40.777 07:52:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:40.777 [2024-07-13 07:52:46.453592] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:40.777 [2024-07-13 07:52:46.453700] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67428 ] 00:06:40.777 [2024-07-13 07:52:46.591014] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.036 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:41.036 [2024-07-13 07:52:46.624768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.977 ====================================== 00:06:41.977 busy:2209957038 (cyc) 00:06:41.977 total_run_count: 325000 00:06:41.977 tsc_hz: 2200000000 (cyc) 00:06:41.977 ====================================== 00:06:41.977 poller_cost: 6799 (cyc), 3090 (nsec) 00:06:41.977 00:06:41.977 real 0m1.245s 00:06:41.977 user 0m1.103s 00:06:41.977 sys 0m0.036s 00:06:41.977 07:52:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.977 07:52:47 -- common/autotest_common.sh@10 -- # set +x 00:06:41.977 ************************************ 00:06:41.977 END TEST thread_poller_perf 00:06:41.977 ************************************ 00:06:41.977 07:52:47 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.977 07:52:47 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:06:41.977 07:52:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:41.977 07:52:47 -- common/autotest_common.sh@10 -- # set +x 00:06:41.977 ************************************ 00:06:41.977 START TEST thread_poller_perf 00:06:41.977 ************************************ 00:06:41.977 07:52:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:41.977 [2024-07-13 07:52:47.751641] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:41.977 [2024-07-13 07:52:47.751744] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67458 ] 00:06:42.245 [2024-07-13 07:52:47.887808] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.245 [2024-07-13 07:52:47.923908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.245 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:43.179 ====================================== 00:06:43.179 busy:2202372832 (cyc) 00:06:43.179 total_run_count: 4468000 00:06:43.179 tsc_hz: 2200000000 (cyc) 00:06:43.179 ====================================== 00:06:43.179 poller_cost: 492 (cyc), 223 (nsec) 00:06:43.179 00:06:43.179 real 0m1.245s 00:06:43.179 user 0m1.094s 00:06:43.179 sys 0m0.044s 00:06:43.179 07:52:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.179 07:52:48 -- common/autotest_common.sh@10 -- # set +x 00:06:43.179 ************************************ 00:06:43.179 END TEST thread_poller_perf 00:06:43.179 ************************************ 00:06:43.438 07:52:49 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:43.438 00:06:43.438 real 0m2.667s 00:06:43.438 user 0m2.260s 00:06:43.438 sys 0m0.194s 00:06:43.438 07:52:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.438 07:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:43.438 ************************************ 00:06:43.438 END TEST thread 00:06:43.438 ************************************ 00:06:43.438 07:52:49 -- spdk/autotest.sh@189 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:43.438 07:52:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:06:43.438 07:52:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:43.438 07:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:43.438 ************************************ 00:06:43.438 START TEST accel 00:06:43.438 ************************************ 00:06:43.438 07:52:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:06:43.438 * Looking for test storage... 00:06:43.438 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:06:43.438 07:52:49 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:06:43.438 07:52:49 -- accel/accel.sh@74 -- # get_expected_opcs 00:06:43.438 07:52:49 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:43.438 07:52:49 -- accel/accel.sh@59 -- # spdk_tgt_pid=67526 00:06:43.438 07:52:49 -- accel/accel.sh@60 -- # waitforlisten 67526 00:06:43.438 07:52:49 -- common/autotest_common.sh@819 -- # '[' -z 67526 ']' 00:06:43.438 07:52:49 -- accel/accel.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:43.438 07:52:49 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.438 07:52:49 -- common/autotest_common.sh@824 -- # local max_retries=100 00:06:43.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.438 07:52:49 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:43.438 07:52:49 -- accel/accel.sh@58 -- # build_accel_config 00:06:43.438 07:52:49 -- common/autotest_common.sh@828 -- # xtrace_disable 00:06:43.438 07:52:49 -- common/autotest_common.sh@10 -- # set +x 00:06:43.438 07:52:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:43.438 07:52:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.438 07:52:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.438 07:52:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:43.438 07:52:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:43.438 07:52:49 -- accel/accel.sh@41 -- # local IFS=, 00:06:43.438 07:52:49 -- accel/accel.sh@42 -- # jq -r . 00:06:43.438 [2024-07-13 07:52:49.219382] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:43.438 [2024-07-13 07:52:49.219494] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67526 ] 00:06:43.733 [2024-07-13 07:52:49.359241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.733 [2024-07-13 07:52:49.392732] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:43.733 [2024-07-13 07:52:49.392933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.667 07:52:50 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:06:44.667 07:52:50 -- common/autotest_common.sh@852 -- # return 0 00:06:44.667 07:52:50 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:44.667 07:52:50 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:06:44.667 07:52:50 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:44.667 07:52:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:06:44.667 07:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:44.667 07:52:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:06:44.667 07:52:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # IFS== 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.667 07:52:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.667 07:52:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # IFS== 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.667 07:52:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.667 07:52:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # IFS== 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.667 07:52:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.667 07:52:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # IFS== 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.667 07:52:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.667 07:52:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # IFS== 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.667 07:52:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.667 07:52:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # IFS== 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.667 07:52:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.667 07:52:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # IFS== 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.667 07:52:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.667 07:52:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # IFS== 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.667 07:52:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.667 07:52:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # IFS== 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.667 07:52:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.667 07:52:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # IFS== 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.667 07:52:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.667 07:52:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # IFS== 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.667 07:52:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.667 07:52:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # IFS== 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.667 
07:52:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.667 07:52:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # IFS== 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.667 07:52:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.667 07:52:50 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # IFS== 00:06:44.667 07:52:50 -- accel/accel.sh@64 -- # read -r opc module 00:06:44.667 07:52:50 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:06:44.667 07:52:50 -- accel/accel.sh@67 -- # killprocess 67526 00:06:44.667 07:52:50 -- common/autotest_common.sh@926 -- # '[' -z 67526 ']' 00:06:44.667 07:52:50 -- common/autotest_common.sh@930 -- # kill -0 67526 00:06:44.667 07:52:50 -- common/autotest_common.sh@931 -- # uname 00:06:44.667 07:52:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:06:44.667 07:52:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 67526 00:06:44.667 07:52:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:06:44.667 07:52:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:06:44.667 07:52:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 67526' 00:06:44.667 killing process with pid 67526 00:06:44.667 07:52:50 -- common/autotest_common.sh@945 -- # kill 67526 00:06:44.667 07:52:50 -- common/autotest_common.sh@950 -- # wait 67526 00:06:44.925 07:52:50 -- accel/accel.sh@68 -- # trap - ERR 00:06:44.925 07:52:50 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:06:44.925 07:52:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:06:44.925 07:52:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.925 07:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:44.925 07:52:50 -- common/autotest_common.sh@1104 -- # accel_perf -h 00:06:44.925 07:52:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:44.925 07:52:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.925 07:52:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.925 07:52:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.925 07:52:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.925 07:52:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.925 07:52:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.925 07:52:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.925 07:52:50 -- accel/accel.sh@42 -- # jq -r . 
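The accel_help step above only asks the example binary for its usage text. A minimal way to reproduce that step by hand, using the binary path from this run and skipping the harness-generated -c config (help is printed either way), is:

    # Print accel_perf usage directly; no accel JSON config is needed just to show help
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -h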
00:06:44.925 07:52:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.925 07:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:44.925 07:52:50 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:44.925 07:52:50 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:44.925 07:52:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:44.925 07:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:44.925 ************************************ 00:06:44.925 START TEST accel_missing_filename 00:06:44.925 ************************************ 00:06:44.925 07:52:50 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress 00:06:44.925 07:52:50 -- common/autotest_common.sh@640 -- # local es=0 00:06:44.925 07:52:50 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:44.925 07:52:50 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:44.925 07:52:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.925 07:52:50 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:44.925 07:52:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:44.926 07:52:50 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress 00:06:44.926 07:52:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:44.926 07:52:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:44.926 07:52:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:44.926 07:52:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.926 07:52:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.926 07:52:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:44.926 07:52:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:44.926 07:52:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:44.926 07:52:50 -- accel/accel.sh@42 -- # jq -r . 00:06:44.926 [2024-07-13 07:52:50.628465] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:44.926 [2024-07-13 07:52:50.628987] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67583 ] 00:06:45.184 [2024-07-13 07:52:50.764708] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.184 [2024-07-13 07:52:50.796422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.184 [2024-07-13 07:52:50.825361] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.184 [2024-07-13 07:52:50.866084] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:45.184 A filename is required. 
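The NOT wrapper traced here comes from autotest_common.sh and passes only when the wrapped command fails; the real helper also classifies the exit status (the es=234 handling that follows), but a simplified, hypothetical stand-in for the idea is:

    # Simplified stand-in for the NOT helper (not the actual autotest_common.sh code):
    # succeed only when the wrapped command fails.
    NOT() {
        if "$@"; then
            return 1   # unexpected success -> test failure
        fi
        return 0       # expected failure -> test passes
    }
    # compress without -l <input file> is expected to fail with "A filename is required."
    NOT /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w compress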
00:06:45.184 07:52:50 -- common/autotest_common.sh@643 -- # es=234 00:06:45.184 07:52:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:45.184 07:52:50 -- common/autotest_common.sh@652 -- # es=106 00:06:45.184 07:52:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:45.184 07:52:50 -- common/autotest_common.sh@660 -- # es=1 00:06:45.184 07:52:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:45.184 00:06:45.184 real 0m0.339s 00:06:45.184 user 0m0.213s 00:06:45.184 sys 0m0.070s 00:06:45.184 ************************************ 00:06:45.184 END TEST accel_missing_filename 00:06:45.184 ************************************ 00:06:45.184 07:52:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.184 07:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.184 07:52:50 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:45.184 07:52:50 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:45.184 07:52:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.184 07:52:50 -- common/autotest_common.sh@10 -- # set +x 00:06:45.184 ************************************ 00:06:45.184 START TEST accel_compress_verify 00:06:45.184 ************************************ 00:06:45.184 07:52:50 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:45.184 07:52:50 -- common/autotest_common.sh@640 -- # local es=0 00:06:45.184 07:52:50 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:45.184 07:52:50 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:45.184 07:52:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.184 07:52:50 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:45.184 07:52:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.184 07:52:50 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:45.184 07:52:50 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:06:45.184 07:52:50 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.184 07:52:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.184 07:52:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.184 07:52:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.184 07:52:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.184 07:52:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.184 07:52:50 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.184 07:52:50 -- accel/accel.sh@42 -- # jq -r . 00:06:45.442 [2024-07-13 07:52:51.013045] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:06:45.442 [2024-07-13 07:52:51.013130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67602 ] 00:06:45.442 [2024-07-13 07:52:51.148517] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.442 [2024-07-13 07:52:51.180310] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.442 [2024-07-13 07:52:51.209535] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:45.442 [2024-07-13 07:52:51.249548] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:06:45.701 00:06:45.701 Compression does not support the verify option, aborting. 00:06:45.701 07:52:51 -- common/autotest_common.sh@643 -- # es=161 00:06:45.701 07:52:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:45.701 07:52:51 -- common/autotest_common.sh@652 -- # es=33 00:06:45.701 07:52:51 -- common/autotest_common.sh@653 -- # case "$es" in 00:06:45.701 07:52:51 -- common/autotest_common.sh@660 -- # es=1 00:06:45.701 07:52:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:45.701 ************************************ 00:06:45.701 END TEST accel_compress_verify 00:06:45.701 ************************************ 00:06:45.701 00:06:45.701 real 0m0.316s 00:06:45.701 user 0m0.190s 00:06:45.701 sys 0m0.069s 00:06:45.701 07:52:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.701 07:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:45.701 07:52:51 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:45.701 07:52:51 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:45.701 07:52:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.701 07:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:45.701 /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat: trap: line 2: unexpected EOF while looking for matching `)' 00:06:45.701 /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat: line 52: unexpected EOF while looking for matching `)' 00:06:45.701 ************************************ 00:06:45.701 START TEST accel_wrong_workload 00:06:45.701 ************************************ 00:06:45.701 07:52:51 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w foobar 00:06:45.701 07:52:51 -- common/autotest_common.sh@640 -- # local es=0 00:06:45.701 07:52:51 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:45.701 07:52:51 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:45.701 07:52:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.701 07:52:51 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:45.701 07:52:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.701 07:52:51 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w foobar 00:06:45.701 07:52:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:45.701 07:52:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.701 07:52:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.701 07:52:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.701 07:52:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.701 07:52:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.701 07:52:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.701 07:52:51 -- 
accel/accel.sh@41 -- # local IFS=, 00:06:45.701 07:52:51 -- accel/accel.sh@42 -- # jq -r . 00:06:45.701 Unsupported workload type: foobar 00:06:45.701 [2024-07-13 07:52:51.372797] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:45.701 accel_perf options: 00:06:45.701 [-h help message] 00:06:45.701 [-q queue depth per core] 00:06:45.701 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:45.701 [-T number of threads per core 00:06:45.701 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:45.701 [-t time in seconds] 00:06:45.701 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:45.701 [ dif_verify, , dif_generate, dif_generate_copy 00:06:45.701 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:45.701 [-l for compress/decompress workloads, name of uncompressed input file 00:06:45.701 [-S for crc32c workload, use this seed value (default 0) 00:06:45.701 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:45.701 [-f for fill workload, use this BYTE value (default 255) 00:06:45.701 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:45.701 [-y verify result if this switch is on] 00:06:45.701 [-a tasks to allocate per core (default: same value as -q)] 00:06:45.701 Can be used to spread operations across a wider range of memory. 00:06:45.701 07:52:51 -- common/autotest_common.sh@643 -- # es=1 00:06:45.701 07:52:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:45.701 07:52:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:45.701 07:52:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:45.701 00:06:45.701 real 0m0.023s 00:06:45.701 user 0m0.014s 00:06:45.701 sys 0m0.009s 00:06:45.701 07:52:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.701 07:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:45.701 ************************************ 00:06:45.701 END TEST accel_wrong_workload 00:06:45.701 ************************************ 00:06:45.701 07:52:51 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:45.701 07:52:51 -- common/autotest_common.sh@1077 -- # '[' 10 -le 1 ']' 00:06:45.702 07:52:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.702 07:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:45.702 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:06:45.702 ************************************ 00:06:45.702 START TEST accel_negative_buffers 00:06:45.702 ************************************ 00:06:45.702 07:52:51 -- common/autotest_common.sh@1104 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:45.702 07:52:51 -- common/autotest_common.sh@640 -- # local es=0 00:06:45.702 07:52:51 -- common/autotest_common.sh@642 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:45.702 07:52:51 -- common/autotest_common.sh@628 -- # local arg=accel_perf 00:06:45.702 07:52:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.702 07:52:51 -- common/autotest_common.sh@632 -- # type -t accel_perf 00:06:45.702 07:52:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:06:45.702 07:52:51 -- common/autotest_common.sh@643 -- # accel_perf -t 1 -w xor -y 
-x -1 00:06:45.702 07:52:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:45.702 07:52:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.702 07:52:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.702 07:52:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.702 07:52:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.702 07:52:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.702 07:52:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.702 07:52:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.702 07:52:51 -- accel/accel.sh@42 -- # jq -r . 00:06:45.702 -x option must be non-negative. 00:06:45.702 [2024-07-13 07:52:51.441463] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:45.702 accel_perf options: 00:06:45.702 [-h help message] 00:06:45.702 [-q queue depth per core] 00:06:45.702 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:45.702 [-T number of threads per core 00:06:45.702 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:45.702 [-t time in seconds] 00:06:45.702 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:45.702 [ dif_verify, , dif_generate, dif_generate_copy 00:06:45.702 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:45.702 [-l for compress/decompress workloads, name of uncompressed input file 00:06:45.702 [-S for crc32c workload, use this seed value (default 0) 00:06:45.702 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:45.702 [-f for fill workload, use this BYTE value (default 255) 00:06:45.702 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:45.702 [-y verify result if this switch is on] 00:06:45.702 [-a tasks to allocate per core (default: same value as -q)] 00:06:45.702 Can be used to spread operations across a wider range of memory. 
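Going by the option summary printed above, a couple of well-formed invocations (illustrative values only, binary path as in this run) would be:

    # xor needs at least two source buffers, so -x must be >= 2
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2
    # crc32c for 1 second at queue depth 64 with a non-default seed, verifying results
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y -q 64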
00:06:45.702 07:52:51 -- common/autotest_common.sh@643 -- # es=1 00:06:45.702 07:52:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:06:45.702 07:52:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:06:45.702 ************************************ 00:06:45.702 END TEST accel_negative_buffers 00:06:45.702 ************************************ 00:06:45.702 07:52:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:06:45.702 00:06:45.702 real 0m0.025s 00:06:45.702 user 0m0.016s 00:06:45.702 sys 0m0.009s 00:06:45.702 07:52:51 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:45.702 07:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:45.702 07:52:51 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:45.702 07:52:51 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:45.702 07:52:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:45.702 07:52:51 -- common/autotest_common.sh@10 -- # set +x 00:06:45.702 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:06:45.702 ************************************ 00:06:45.702 START TEST accel_crc32c 00:06:45.702 ************************************ 00:06:45.702 07:52:51 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:45.702 07:52:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:45.702 07:52:51 -- accel/accel.sh@17 -- # local accel_module 00:06:45.702 07:52:51 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:45.702 07:52:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:45.702 07:52:51 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:45.702 07:52:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:45.702 07:52:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:45.702 07:52:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:45.702 07:52:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:45.702 07:52:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:45.702 07:52:51 -- accel/accel.sh@41 -- # local IFS=, 00:06:45.702 07:52:51 -- accel/accel.sh@42 -- # jq -r . 00:06:45.702 [2024-07-13 07:52:51.508565] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:45.702 [2024-07-13 07:52:51.508643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67661 ] 00:06:45.961 [2024-07-13 07:52:51.639106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.961 [2024-07-13 07:52:51.671521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.334 07:52:52 -- accel/accel.sh@18 -- # out=' 00:06:47.334 SPDK Configuration: 00:06:47.334 Core mask: 0x1 00:06:47.334 00:06:47.334 Accel Perf Configuration: 00:06:47.334 Workload Type: crc32c 00:06:47.334 CRC-32C seed: 32 00:06:47.334 Transfer size: 4096 bytes 00:06:47.334 Vector count 1 00:06:47.334 Module: software 00:06:47.334 Queue depth: 32 00:06:47.334 Allocate depth: 32 00:06:47.334 # threads/core: 1 00:06:47.334 Run time: 1 seconds 00:06:47.334 Verify: Yes 00:06:47.334 00:06:47.334 Running for 1 seconds... 
00:06:47.334 00:06:47.334 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:47.334 ------------------------------------------------------------------------------------ 00:06:47.334 0,0 491648/s 1920 MiB/s 0 0 00:06:47.334 ==================================================================================== 00:06:47.334 Total 491648/s 1920 MiB/s 0 0' 00:06:47.334 07:52:52 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:52 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:47.334 07:52:52 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:52 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.334 07:52:52 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:47.334 07:52:52 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:47.334 07:52:52 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.334 07:52:52 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.334 07:52:52 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:47.334 07:52:52 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:47.334 07:52:52 -- accel/accel.sh@41 -- # local IFS=, 00:06:47.334 07:52:52 -- accel/accel.sh@42 -- # jq -r . 00:06:47.334 [2024-07-13 07:52:52.823868] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:47.334 [2024-07-13 07:52:52.823958] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67669 ] 00:06:47.334 [2024-07-13 07:52:52.961966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.334 [2024-07-13 07:52:52.999138] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val= 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val= 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val=0x1 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val= 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val= 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val=crc32c 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val=32 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val= 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val=software 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@23 -- # accel_module=software 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val=32 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val=32 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val=1 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val=Yes 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val= 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:47.334 07:52:53 -- accel/accel.sh@21 -- # val= 00:06:47.334 07:52:53 -- accel/accel.sh@22 -- # case "$var" in 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # IFS=: 00:06:47.334 07:52:53 -- accel/accel.sh@20 -- # read -r var val 00:06:48.708 07:52:54 -- accel/accel.sh@21 -- # val= 00:06:48.708 07:52:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.708 07:52:54 -- accel/accel.sh@20 -- # IFS=: 00:06:48.708 07:52:54 -- accel/accel.sh@20 -- # read -r var val 00:06:48.708 07:52:54 -- accel/accel.sh@21 -- # val= 00:06:48.708 07:52:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.708 07:52:54 -- accel/accel.sh@20 -- # IFS=: 00:06:48.708 07:52:54 -- accel/accel.sh@20 -- # read -r var val 00:06:48.708 07:52:54 -- accel/accel.sh@21 -- # val= 00:06:48.708 07:52:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.708 07:52:54 -- accel/accel.sh@20 -- # IFS=: 00:06:48.708 07:52:54 -- accel/accel.sh@20 -- # read -r var val 00:06:48.708 ************************************ 00:06:48.708 END TEST accel_crc32c 00:06:48.708 ************************************ 00:06:48.708 07:52:54 -- accel/accel.sh@21 -- # val= 00:06:48.708 07:52:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.708 07:52:54 -- accel/accel.sh@20 -- # IFS=: 00:06:48.708 07:52:54 -- accel/accel.sh@20 -- # read -r var val 00:06:48.708 07:52:54 -- accel/accel.sh@21 -- # val= 
00:06:48.708 07:52:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.708 07:52:54 -- accel/accel.sh@20 -- # IFS=: 00:06:48.708 07:52:54 -- accel/accel.sh@20 -- # read -r var val 00:06:48.708 07:52:54 -- accel/accel.sh@21 -- # val= 00:06:48.708 07:52:54 -- accel/accel.sh@22 -- # case "$var" in 00:06:48.708 07:52:54 -- accel/accel.sh@20 -- # IFS=: 00:06:48.708 07:52:54 -- accel/accel.sh@20 -- # read -r var val 00:06:48.708 07:52:54 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:48.708 07:52:54 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:48.708 07:52:54 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.708 00:06:48.708 real 0m2.646s 00:06:48.708 user 0m2.299s 00:06:48.708 sys 0m0.149s 00:06:48.708 07:52:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.708 07:52:54 -- common/autotest_common.sh@10 -- # set +x 00:06:48.708 07:52:54 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:48.708 07:52:54 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:06:48.708 07:52:54 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:48.708 07:52:54 -- common/autotest_common.sh@10 -- # set +x 00:06:48.708 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:06:48.708 ************************************ 00:06:48.708 START TEST accel_crc32c_C2 00:06:48.708 ************************************ 00:06:48.708 07:52:54 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:48.708 07:52:54 -- accel/accel.sh@16 -- # local accel_opc 00:06:48.708 07:52:54 -- accel/accel.sh@17 -- # local accel_module 00:06:48.708 07:52:54 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:48.708 07:52:54 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:48.708 07:52:54 -- accel/accel.sh@12 -- # build_accel_config 00:06:48.708 07:52:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:48.708 07:52:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.708 07:52:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.708 07:52:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:48.708 07:52:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:48.708 07:52:54 -- accel/accel.sh@41 -- # local IFS=, 00:06:48.708 07:52:54 -- accel/accel.sh@42 -- # jq -r . 00:06:48.708 [2024-07-13 07:52:54.212431] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:48.708 [2024-07-13 07:52:54.212518] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67692 ] 00:06:48.708 [2024-07-13 07:52:54.349603] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.708 [2024-07-13 07:52:54.382395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.086 07:52:55 -- accel/accel.sh@18 -- # out=' 00:06:50.086 SPDK Configuration: 00:06:50.086 Core mask: 0x1 00:06:50.086 00:06:50.086 Accel Perf Configuration: 00:06:50.086 Workload Type: crc32c 00:06:50.086 CRC-32C seed: 0 00:06:50.086 Transfer size: 4096 bytes 00:06:50.086 Vector count 2 00:06:50.086 Module: software 00:06:50.086 Queue depth: 32 00:06:50.086 Allocate depth: 32 00:06:50.086 # threads/core: 1 00:06:50.086 Run time: 1 seconds 00:06:50.086 Verify: Yes 00:06:50.086 00:06:50.086 Running for 1 seconds... 
00:06:50.086 00:06:50.086 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:50.086 ------------------------------------------------------------------------------------ 00:06:50.086 0,0 376224/s 2939 MiB/s 0 0 00:06:50.086 ==================================================================================== 00:06:50.087 Total 376224/s 1469 MiB/s 0 0' 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@12 -- # build_accel_config 00:06:50.087 07:52:55 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:50.087 07:52:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:50.087 07:52:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:50.087 07:52:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:50.087 07:52:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:50.087 07:52:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:50.087 07:52:55 -- accel/accel.sh@41 -- # local IFS=, 00:06:50.087 07:52:55 -- accel/accel.sh@42 -- # jq -r . 00:06:50.087 [2024-07-13 07:52:55.533817] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:50.087 [2024-07-13 07:52:55.533903] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67700 ] 00:06:50.087 [2024-07-13 07:52:55.669995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.087 [2024-07-13 07:52:55.701999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val= 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val= 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val=0x1 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val= 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val= 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val=crc32c 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val=0 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- 
accel/accel.sh@21 -- # val='4096 bytes' 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val= 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val=software 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@23 -- # accel_module=software 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val=32 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val=32 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val=1 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val=Yes 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val= 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:50.087 07:52:55 -- accel/accel.sh@21 -- # val= 00:06:50.087 07:52:55 -- accel/accel.sh@22 -- # case "$var" in 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # IFS=: 00:06:50.087 07:52:55 -- accel/accel.sh@20 -- # read -r var val 00:06:51.022 07:52:56 -- accel/accel.sh@21 -- # val= 00:06:51.022 07:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.022 07:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:51.022 07:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:51.022 07:52:56 -- accel/accel.sh@21 -- # val= 00:06:51.022 07:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.022 07:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:51.022 07:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:51.022 07:52:56 -- accel/accel.sh@21 -- # val= 00:06:51.022 07:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.022 07:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:51.022 07:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:51.022 07:52:56 -- accel/accel.sh@21 -- # val= 00:06:51.022 07:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.022 07:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:51.022 07:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:51.022 ************************************ 00:06:51.022 END TEST accel_crc32c_C2 00:06:51.022 ************************************ 00:06:51.022 07:52:56 -- accel/accel.sh@21 -- # val= 
00:06:51.022 07:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.022 07:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:51.022 07:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:51.022 07:52:56 -- accel/accel.sh@21 -- # val= 00:06:51.022 07:52:56 -- accel/accel.sh@22 -- # case "$var" in 00:06:51.022 07:52:56 -- accel/accel.sh@20 -- # IFS=: 00:06:51.022 07:52:56 -- accel/accel.sh@20 -- # read -r var val 00:06:51.022 07:52:56 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:51.022 07:52:56 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:06:51.022 07:52:56 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:51.022 00:06:51.022 real 0m2.642s 00:06:51.022 user 0m2.286s 00:06:51.022 sys 0m0.155s 00:06:51.022 07:52:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.022 07:52:56 -- common/autotest_common.sh@10 -- # set +x 00:06:51.282 07:52:56 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:51.282 07:52:56 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:51.282 07:52:56 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:51.282 07:52:56 -- common/autotest_common.sh@10 -- # set +x 00:06:51.282 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:06:51.282 ************************************ 00:06:51.282 START TEST accel_copy 00:06:51.282 ************************************ 00:06:51.282 07:52:56 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy -y 00:06:51.282 07:52:56 -- accel/accel.sh@16 -- # local accel_opc 00:06:51.282 07:52:56 -- accel/accel.sh@17 -- # local accel_module 00:06:51.282 07:52:56 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:06:51.282 07:52:56 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:51.282 07:52:56 -- accel/accel.sh@12 -- # build_accel_config 00:06:51.282 07:52:56 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:51.282 07:52:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:51.282 07:52:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:51.282 07:52:56 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:51.282 07:52:56 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:51.282 07:52:56 -- accel/accel.sh@41 -- # local IFS=, 00:06:51.282 07:52:56 -- accel/accel.sh@42 -- # jq -r . 00:06:51.282 [2024-07-13 07:52:56.904086] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:51.282 [2024-07-13 07:52:56.904172] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67728 ] 00:06:51.282 [2024-07-13 07:52:57.038906] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.282 [2024-07-13 07:52:57.070520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.661 07:52:58 -- accel/accel.sh@18 -- # out=' 00:06:52.661 SPDK Configuration: 00:06:52.661 Core mask: 0x1 00:06:52.661 00:06:52.661 Accel Perf Configuration: 00:06:52.661 Workload Type: copy 00:06:52.661 Transfer size: 4096 bytes 00:06:52.661 Vector count 1 00:06:52.661 Module: software 00:06:52.661 Queue depth: 32 00:06:52.661 Allocate depth: 32 00:06:52.661 # threads/core: 1 00:06:52.661 Run time: 1 seconds 00:06:52.661 Verify: Yes 00:06:52.661 00:06:52.661 Running for 1 seconds... 
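In the throughput tables above, the MiB/s column is simply transfers per second times the 4096-byte transfer size; for the first crc32c table, for example:

    # 491648 transfers/s * 4096 B per transfer, expressed in MiB/s (integer math)
    echo $(( 491648 * 4096 / 1024 / 1024 ))   # prints 1920, matching the table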
00:06:52.661 00:06:52.661 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:52.661 ------------------------------------------------------------------------------------ 00:06:52.661 0,0 343296/s 1341 MiB/s 0 0 00:06:52.661 ==================================================================================== 00:06:52.661 Total 343296/s 1341 MiB/s 0 0' 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.661 07:52:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:52.661 07:52:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:52.661 07:52:58 -- accel/accel.sh@12 -- # build_accel_config 00:06:52.661 07:52:58 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:52.661 07:52:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:52.661 07:52:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:52.661 07:52:58 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:52.661 07:52:58 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:52.661 07:52:58 -- accel/accel.sh@41 -- # local IFS=, 00:06:52.661 07:52:58 -- accel/accel.sh@42 -- # jq -r . 00:06:52.661 [2024-07-13 07:52:58.225334] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:52.661 [2024-07-13 07:52:58.225608] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67738 ] 00:06:52.661 [2024-07-13 07:52:58.362923] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.661 [2024-07-13 07:52:58.397505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.661 07:52:58 -- accel/accel.sh@21 -- # val= 00:06:52.661 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.661 07:52:58 -- accel/accel.sh@21 -- # val= 00:06:52.661 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.661 07:52:58 -- accel/accel.sh@21 -- # val=0x1 00:06:52.661 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.661 07:52:58 -- accel/accel.sh@21 -- # val= 00:06:52.661 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.661 07:52:58 -- accel/accel.sh@21 -- # val= 00:06:52.661 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.661 07:52:58 -- accel/accel.sh@21 -- # val=copy 00:06:52.661 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.661 07:52:58 -- accel/accel.sh@24 -- # accel_opc=copy 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.661 07:52:58 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:52.661 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.661 07:52:58 -- 
accel/accel.sh@21 -- # val= 00:06:52.661 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.661 07:52:58 -- accel/accel.sh@21 -- # val=software 00:06:52.661 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.661 07:52:58 -- accel/accel.sh@23 -- # accel_module=software 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.661 07:52:58 -- accel/accel.sh@21 -- # val=32 00:06:52.661 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.661 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.662 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.662 07:52:58 -- accel/accel.sh@21 -- # val=32 00:06:52.662 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.662 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.662 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.662 07:52:58 -- accel/accel.sh@21 -- # val=1 00:06:52.662 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.662 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.662 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.662 07:52:58 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:52.662 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.662 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.662 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.662 07:52:58 -- accel/accel.sh@21 -- # val=Yes 00:06:52.662 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.662 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.662 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.662 07:52:58 -- accel/accel.sh@21 -- # val= 00:06:52.662 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.662 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.662 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:52.662 07:52:58 -- accel/accel.sh@21 -- # val= 00:06:52.662 07:52:58 -- accel/accel.sh@22 -- # case "$var" in 00:06:52.662 07:52:58 -- accel/accel.sh@20 -- # IFS=: 00:06:52.662 07:52:58 -- accel/accel.sh@20 -- # read -r var val 00:06:54.039 07:52:59 -- accel/accel.sh@21 -- # val= 00:06:54.039 07:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.039 07:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:54.039 07:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:54.039 07:52:59 -- accel/accel.sh@21 -- # val= 00:06:54.039 07:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.039 07:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:54.039 07:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:54.039 07:52:59 -- accel/accel.sh@21 -- # val= 00:06:54.039 07:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.039 07:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:54.039 07:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:54.039 07:52:59 -- accel/accel.sh@21 -- # val= 00:06:54.039 07:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.039 07:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:54.039 07:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:54.039 07:52:59 -- accel/accel.sh@21 -- # val= 00:06:54.039 07:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.039 07:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:54.039 07:52:59 -- accel/accel.sh@20 -- # read -r var val 00:06:54.039 07:52:59 -- accel/accel.sh@21 -- # val= 00:06:54.039 07:52:59 -- accel/accel.sh@22 -- # case "$var" in 00:06:54.039 07:52:59 -- accel/accel.sh@20 -- # IFS=: 00:06:54.039 07:52:59 -- 
accel/accel.sh@20 -- # read -r var val 00:06:54.039 07:52:59 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:54.039 07:52:59 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:06:54.039 07:52:59 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:54.039 00:06:54.039 real 0m2.644s 00:06:54.039 user 0m2.292s 00:06:54.039 sys 0m0.150s 00:06:54.039 07:52:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.039 ************************************ 00:06:54.039 END TEST accel_copy 00:06:54.039 ************************************ 00:06:54.039 07:52:59 -- common/autotest_common.sh@10 -- # set +x 00:06:54.039 07:52:59 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.039 07:52:59 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:06:54.039 07:52:59 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:54.039 07:52:59 -- common/autotest_common.sh@10 -- # set +x 00:06:54.039 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:06:54.039 ************************************ 00:06:54.039 START TEST accel_fill 00:06:54.039 ************************************ 00:06:54.039 07:52:59 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.039 07:52:59 -- accel/accel.sh@16 -- # local accel_opc 00:06:54.039 07:52:59 -- accel/accel.sh@17 -- # local accel_module 00:06:54.039 07:52:59 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.039 07:52:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:54.039 07:52:59 -- accel/accel.sh@12 -- # build_accel_config 00:06:54.039 07:52:59 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:54.039 07:52:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.039 07:52:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.039 07:52:59 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:54.039 07:52:59 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:54.039 07:52:59 -- accel/accel.sh@41 -- # local IFS=, 00:06:54.039 07:52:59 -- accel/accel.sh@42 -- # jq -r . 00:06:54.039 [2024-07-13 07:52:59.592937] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:54.039 [2024-07-13 07:52:59.593003] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67761 ] 00:06:54.039 [2024-07-13 07:52:59.725527] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.039 [2024-07-13 07:52:59.758212] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.417 07:53:00 -- accel/accel.sh@18 -- # out=' 00:06:55.417 SPDK Configuration: 00:06:55.417 Core mask: 0x1 00:06:55.417 00:06:55.417 Accel Perf Configuration: 00:06:55.417 Workload Type: fill 00:06:55.417 Fill pattern: 0x80 00:06:55.417 Transfer size: 4096 bytes 00:06:55.417 Vector count 1 00:06:55.417 Module: software 00:06:55.417 Queue depth: 64 00:06:55.417 Allocate depth: 64 00:06:55.417 # threads/core: 1 00:06:55.417 Run time: 1 seconds 00:06:55.417 Verify: Yes 00:06:55.417 00:06:55.417 Running for 1 seconds... 
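Each workload test in this trace launches accel_perf twice (note the two DPDK EAL initializations with distinct pids per test) and then asserts that the opcode was serviced by the expected module. The checks visible just above for the copy test amount to roughly the following, with the variable names taken from accel.sh's own xtrace output:

    # Sketch of the final assertion: the opcode must have run on the software module
    [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == software ]] \
        && echo "copy was handled by the software module"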
00:06:55.417 00:06:55.417 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:55.417 ------------------------------------------------------------------------------------ 00:06:55.417 0,0 513472/s 2005 MiB/s 0 0 00:06:55.417 ==================================================================================== 00:06:55.417 Total 513472/s 2005 MiB/s 0 0' 00:06:55.417 07:53:00 -- accel/accel.sh@20 -- # IFS=: 00:06:55.417 07:53:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.417 07:53:00 -- accel/accel.sh@20 -- # read -r var val 00:06:55.417 07:53:00 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:55.417 07:53:00 -- accel/accel.sh@12 -- # build_accel_config 00:06:55.417 07:53:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:55.417 07:53:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.417 07:53:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.417 07:53:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:55.417 07:53:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:55.417 07:53:00 -- accel/accel.sh@41 -- # local IFS=, 00:06:55.417 07:53:00 -- accel/accel.sh@42 -- # jq -r . 00:06:55.417 [2024-07-13 07:53:00.911543] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:55.417 [2024-07-13 07:53:00.911629] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67774 ] 00:06:55.417 [2024-07-13 07:53:01.045089] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.417 [2024-07-13 07:53:01.076970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.417 07:53:01 -- accel/accel.sh@21 -- # val= 00:06:55.417 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.417 07:53:01 -- accel/accel.sh@21 -- # val= 00:06:55.417 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.417 07:53:01 -- accel/accel.sh@21 -- # val=0x1 00:06:55.417 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.417 07:53:01 -- accel/accel.sh@21 -- # val= 00:06:55.417 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.417 07:53:01 -- accel/accel.sh@21 -- # val= 00:06:55.417 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.417 07:53:01 -- accel/accel.sh@21 -- # val=fill 00:06:55.417 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.417 07:53:01 -- accel/accel.sh@24 -- # accel_opc=fill 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.417 07:53:01 -- accel/accel.sh@21 -- # val=0x80 00:06:55.417 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # read -r var val 
00:06:55.417 07:53:01 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:55.417 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.417 07:53:01 -- accel/accel.sh@21 -- # val= 00:06:55.417 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.417 07:53:01 -- accel/accel.sh@21 -- # val=software 00:06:55.417 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.417 07:53:01 -- accel/accel.sh@23 -- # accel_module=software 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.417 07:53:01 -- accel/accel.sh@21 -- # val=64 00:06:55.417 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.417 07:53:01 -- accel/accel.sh@21 -- # val=64 00:06:55.417 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.417 07:53:01 -- accel/accel.sh@21 -- # val=1 00:06:55.417 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.417 07:53:01 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:55.417 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.417 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.418 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.418 07:53:01 -- accel/accel.sh@21 -- # val=Yes 00:06:55.418 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.418 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.418 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.418 07:53:01 -- accel/accel.sh@21 -- # val= 00:06:55.418 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.418 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.418 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:55.418 07:53:01 -- accel/accel.sh@21 -- # val= 00:06:55.418 07:53:01 -- accel/accel.sh@22 -- # case "$var" in 00:06:55.418 07:53:01 -- accel/accel.sh@20 -- # IFS=: 00:06:55.418 07:53:01 -- accel/accel.sh@20 -- # read -r var val 00:06:56.797 07:53:02 -- accel/accel.sh@21 -- # val= 00:06:56.798 07:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.798 07:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:56.798 07:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:56.798 07:53:02 -- accel/accel.sh@21 -- # val= 00:06:56.798 07:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.798 07:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:56.798 07:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:56.798 07:53:02 -- accel/accel.sh@21 -- # val= 00:06:56.798 07:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.798 07:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:56.798 07:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:56.798 07:53:02 -- accel/accel.sh@21 -- # val= 00:06:56.798 07:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.798 07:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:56.798 07:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:56.798 07:53:02 -- accel/accel.sh@21 -- # val= 00:06:56.798 07:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.798 07:53:02 -- accel/accel.sh@20 -- # IFS=: 
00:06:56.798 07:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:56.798 07:53:02 -- accel/accel.sh@21 -- # val= 00:06:56.798 07:53:02 -- accel/accel.sh@22 -- # case "$var" in 00:06:56.798 07:53:02 -- accel/accel.sh@20 -- # IFS=: 00:06:56.798 07:53:02 -- accel/accel.sh@20 -- # read -r var val 00:06:56.798 07:53:02 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:56.798 07:53:02 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:06:56.798 07:53:02 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:56.798 00:06:56.798 real 0m2.635s 00:06:56.798 user 0m2.293s 00:06:56.798 sys 0m0.139s 00:06:56.798 07:53:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.798 07:53:02 -- common/autotest_common.sh@10 -- # set +x 00:06:56.798 ************************************ 00:06:56.798 END TEST accel_fill 00:06:56.798 ************************************ 00:06:56.798 07:53:02 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:56.798 07:53:02 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:06:56.798 07:53:02 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:56.798 07:53:02 -- common/autotest_common.sh@10 -- # set +x 00:06:56.798 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:06:56.798 ************************************ 00:06:56.798 START TEST accel_copy_crc32c 00:06:56.798 ************************************ 00:06:56.798 07:53:02 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y 00:06:56.798 07:53:02 -- accel/accel.sh@16 -- # local accel_opc 00:06:56.798 07:53:02 -- accel/accel.sh@17 -- # local accel_module 00:06:56.798 07:53:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:56.798 07:53:02 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:56.798 07:53:02 -- accel/accel.sh@12 -- # build_accel_config 00:06:56.798 07:53:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:56.798 07:53:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.798 07:53:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.798 07:53:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:56.798 07:53:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:56.798 07:53:02 -- accel/accel.sh@41 -- # local IFS=, 00:06:56.798 07:53:02 -- accel/accel.sh@42 -- # jq -r . 00:06:56.798 [2024-07-13 07:53:02.282430] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
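The accel_fill test that completes just above drives the software "fill" engine: a single byte pattern is written across the destination buffer and then verified. As a minimal Python sketch of that behaviour (illustrative only, not SPDK's accel code; the 0xA5 pattern byte is an assumption, the 4096-byte size matches the '4096 bytes' value in the trace):

    # Illustrative software 'fill': write one repeated byte, then verify it.
    def fill(pattern: int, size: int) -> bytes:
        dst = bytes([pattern]) * size          # fill the destination with the pattern byte
        assert all(b == pattern for b in dst)  # verify pass ('Verify: Yes')
        return dst

    buf = fill(0xA5, 4096)  # pattern byte assumed; 4096-byte buffer as in the trace above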
00:06:56.798 [2024-07-13 07:53:02.282527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67797 ] 00:06:56.798 [2024-07-13 07:53:02.419442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.798 [2024-07-13 07:53:02.459326] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.192 07:53:03 -- accel/accel.sh@18 -- # out=' 00:06:58.192 SPDK Configuration: 00:06:58.192 Core mask: 0x1 00:06:58.192 00:06:58.192 Accel Perf Configuration: 00:06:58.192 Workload Type: copy_crc32c 00:06:58.192 CRC-32C seed: 0 00:06:58.192 Vector size: 4096 bytes 00:06:58.192 Transfer size: 4096 bytes 00:06:58.192 Vector count 1 00:06:58.192 Module: software 00:06:58.192 Queue depth: 32 00:06:58.192 Allocate depth: 32 00:06:58.192 # threads/core: 1 00:06:58.192 Run time: 1 seconds 00:06:58.192 Verify: Yes 00:06:58.192 00:06:58.192 Running for 1 seconds... 00:06:58.192 00:06:58.192 Core,Thread Transfers Bandwidth Failed Miscompares 00:06:58.192 ------------------------------------------------------------------------------------ 00:06:58.192 0,0 274688/s 1073 MiB/s 0 0 00:06:58.192 ==================================================================================== 00:06:58.192 Total 274688/s 1073 MiB/s 0 0' 00:06:58.192 07:53:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:58.192 07:53:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.192 07:53:03 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.192 07:53:03 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:58.192 07:53:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.192 07:53:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.192 07:53:03 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:58.192 07:53:03 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:58.192 07:53:03 -- accel/accel.sh@41 -- # local IFS=, 00:06:58.192 07:53:03 -- accel/accel.sh@42 -- # jq -r . 00:06:58.192 [2024-07-13 07:53:03.620869] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
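The copy_crc32c configuration above reports CRC-32C seed 0, a 4096-byte vector size and a single source vector: the engine copies the source into the destination and computes a CRC-32C over the data. A rough Python model of that software path follows (the bitwise Castagnoli CRC is a textbook implementation, the seed-to-initial-value convention is an assumption, and none of this is SPDK's actual code):

    # Illustrative copy_crc32c: copy src to dst and checksum the data with CRC-32C.
    def crc32c(data: bytes, crc: int = 0) -> int:
        crc ^= 0xFFFFFFFF                      # fold the seed into the working register
        for byte in data:
            crc ^= byte
            for _ in range(8):                 # reflected Castagnoli polynomial, bit by bit
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    src = bytes(range(256)) * 16               # 4096-byte source (vector size above)
    dst = bytes(src)                           # the copy half of the operation
    checksum = crc32c(src)                     # seed 0, as reported in the configuration
    assert dst == src                          # verify pass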
00:06:58.192 [2024-07-13 07:53:03.620985] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67805 ] 00:06:58.192 [2024-07-13 07:53:03.765486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.192 [2024-07-13 07:53:03.797646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.192 07:53:03 -- accel/accel.sh@21 -- # val= 00:06:58.192 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.192 07:53:03 -- accel/accel.sh@21 -- # val= 00:06:58.192 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.192 07:53:03 -- accel/accel.sh@21 -- # val=0x1 00:06:58.192 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.192 07:53:03 -- accel/accel.sh@21 -- # val= 00:06:58.192 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.192 07:53:03 -- accel/accel.sh@21 -- # val= 00:06:58.192 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.192 07:53:03 -- accel/accel.sh@21 -- # val=copy_crc32c 00:06:58.192 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.192 07:53:03 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.192 07:53:03 -- accel/accel.sh@21 -- # val=0 00:06:58.192 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.192 07:53:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.192 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.192 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.192 07:53:03 -- accel/accel.sh@21 -- # val='4096 bytes' 00:06:58.193 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.193 07:53:03 -- accel/accel.sh@21 -- # val= 00:06:58.193 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.193 07:53:03 -- accel/accel.sh@21 -- # val=software 00:06:58.193 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.193 07:53:03 -- accel/accel.sh@23 -- # accel_module=software 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.193 07:53:03 -- accel/accel.sh@21 -- # val=32 00:06:58.193 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.193 07:53:03 -- accel/accel.sh@21 -- # val=32 
00:06:58.193 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.193 07:53:03 -- accel/accel.sh@21 -- # val=1 00:06:58.193 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.193 07:53:03 -- accel/accel.sh@21 -- # val='1 seconds' 00:06:58.193 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.193 07:53:03 -- accel/accel.sh@21 -- # val=Yes 00:06:58.193 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.193 07:53:03 -- accel/accel.sh@21 -- # val= 00:06:58.193 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:58.193 07:53:03 -- accel/accel.sh@21 -- # val= 00:06:58.193 07:53:03 -- accel/accel.sh@22 -- # case "$var" in 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # IFS=: 00:06:58.193 07:53:03 -- accel/accel.sh@20 -- # read -r var val 00:06:59.139 07:53:04 -- accel/accel.sh@21 -- # val= 00:06:59.139 07:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.139 07:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:59.139 07:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:59.139 07:53:04 -- accel/accel.sh@21 -- # val= 00:06:59.139 07:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.139 07:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:59.139 07:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:59.139 07:53:04 -- accel/accel.sh@21 -- # val= 00:06:59.139 07:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.139 07:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:59.139 07:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:59.139 07:53:04 -- accel/accel.sh@21 -- # val= 00:06:59.139 07:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.139 07:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:59.139 07:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:59.139 07:53:04 -- accel/accel.sh@21 -- # val= 00:06:59.139 07:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.139 07:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:59.139 07:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:59.139 07:53:04 -- accel/accel.sh@21 -- # val= 00:06:59.139 07:53:04 -- accel/accel.sh@22 -- # case "$var" in 00:06:59.139 07:53:04 -- accel/accel.sh@20 -- # IFS=: 00:06:59.139 07:53:04 -- accel/accel.sh@20 -- # read -r var val 00:06:59.139 07:53:04 -- accel/accel.sh@28 -- # [[ -n software ]] 00:06:59.139 07:53:04 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:06:59.139 07:53:04 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.139 00:06:59.139 real 0m2.668s 00:06:59.139 user 0m2.289s 00:06:59.139 sys 0m0.173s 00:06:59.139 07:53:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.139 07:53:04 -- common/autotest_common.sh@10 -- # set +x 00:06:59.139 ************************************ 00:06:59.139 END TEST accel_copy_crc32c 00:06:59.139 ************************************ 00:06:59.399 07:53:04 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:59.399 07:53:04 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 
']' 00:06:59.399 07:53:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:06:59.399 07:53:04 -- common/autotest_common.sh@10 -- # set +x 00:06:59.399 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:06:59.399 ************************************ 00:06:59.399 START TEST accel_copy_crc32c_C2 00:06:59.399 ************************************ 00:06:59.399 07:53:04 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:59.399 07:53:04 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.399 07:53:04 -- accel/accel.sh@17 -- # local accel_module 00:06:59.399 07:53:04 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:59.399 07:53:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:59.399 07:53:04 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.399 07:53:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:06:59.399 07:53:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.399 07:53:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.399 07:53:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:06:59.399 07:53:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:06:59.399 07:53:04 -- accel/accel.sh@41 -- # local IFS=, 00:06:59.399 07:53:04 -- accel/accel.sh@42 -- # jq -r . 00:06:59.399 [2024-07-13 07:53:05.003469] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:06:59.399 [2024-07-13 07:53:05.004033] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67828 ] 00:06:59.399 [2024-07-13 07:53:05.144538] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.399 [2024-07-13 07:53:05.185114] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.794 07:53:06 -- accel/accel.sh@18 -- # out=' 00:07:00.794 SPDK Configuration: 00:07:00.794 Core mask: 0x1 00:07:00.794 00:07:00.794 Accel Perf Configuration: 00:07:00.794 Workload Type: copy_crc32c 00:07:00.794 CRC-32C seed: 0 00:07:00.794 Vector size: 4096 bytes 00:07:00.794 Transfer size: 8192 bytes 00:07:00.794 Vector count 2 00:07:00.794 Module: software 00:07:00.794 Queue depth: 32 00:07:00.794 Allocate depth: 32 00:07:00.794 # threads/core: 1 00:07:00.794 Run time: 1 seconds 00:07:00.794 Verify: Yes 00:07:00.794 00:07:00.794 Running for 1 seconds... 
00:07:00.794 00:07:00.794 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:00.794 ------------------------------------------------------------------------------------ 00:07:00.794 0,0 197152/s 1540 MiB/s 0 0 00:07:00.794 ==================================================================================== 00:07:00.794 Total 197152/s 770 MiB/s 0 0' 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:00.794 07:53:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:00.794 07:53:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:00.794 07:53:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:00.794 07:53:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:00.794 07:53:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:00.794 07:53:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:00.794 07:53:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:00.794 07:53:06 -- accel/accel.sh@42 -- # jq -r . 00:07:00.794 [2024-07-13 07:53:06.337900] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:00.794 [2024-07-13 07:53:06.337980] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67841 ] 00:07:00.794 [2024-07-13 07:53:06.468060] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.794 [2024-07-13 07:53:06.498873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val= 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val= 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val=0x1 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val= 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val= 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val=0 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 
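The -C 2 variant above ("Vector count 2", "Vector size: 4096 bytes", "Transfer size: 8192 bytes") feeds two 4096-byte source vectors into each 8192-byte operation. A sketch of how a single CRC-32C can be chained across both vectors (the chaining convention is an assumption about the workload, not a statement about SPDK's engine):

    # Two 4096-byte source vectors -> one 8192-byte destination, one chained CRC-32C.
    def crc32c(data: bytes, crc: int = 0) -> int:  # same textbook helper as the sketch above
        crc ^= 0xFFFFFFFF
        for byte in data:
            crc ^= byte
            for _ in range(8):
                crc = (crc >> 1) ^ (0x82F63B78 if crc & 1 else 0)
        return crc ^ 0xFFFFFFFF

    vectors = [bytes([i]) * 4096 for i in (1, 2)]  # vector count 2, vector size 4096
    dst = b"".join(vectors)                        # 8192-byte transfer
    crc = 0                                        # seed 0
    for vec in vectors:
        crc = crc32c(vec, crc)                     # carry the running checksum forward
    assert crc == crc32c(dst)                      # chaining equals one pass over the whole transfer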
00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val= 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val=software 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val=32 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val=32 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val=1 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val=Yes 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val= 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:00.794 07:53:06 -- accel/accel.sh@21 -- # val= 00:07:00.794 07:53:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # IFS=: 00:07:00.794 07:53:06 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 07:53:07 -- accel/accel.sh@21 -- # val= 00:07:02.172 07:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 07:53:07 -- accel/accel.sh@20 -- # IFS=: 00:07:02.172 07:53:07 -- accel/accel.sh@20 -- # read -r var val 00:07:02.172 07:53:07 -- accel/accel.sh@21 -- # val= 00:07:02.172 07:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.172 07:53:07 -- accel/accel.sh@20 -- # IFS=: 00:07:02.173 07:53:07 -- accel/accel.sh@20 -- # read -r var val 00:07:02.173 07:53:07 -- accel/accel.sh@21 -- # val= 00:07:02.173 07:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.173 07:53:07 -- accel/accel.sh@20 -- # IFS=: 00:07:02.173 07:53:07 -- accel/accel.sh@20 -- # read -r var val 00:07:02.173 07:53:07 -- accel/accel.sh@21 -- # val= 00:07:02.173 07:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.173 07:53:07 -- 
accel/accel.sh@20 -- # IFS=: 00:07:02.173 07:53:07 -- accel/accel.sh@20 -- # read -r var val 00:07:02.173 07:53:07 -- accel/accel.sh@21 -- # val= 00:07:02.173 07:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.173 07:53:07 -- accel/accel.sh@20 -- # IFS=: 00:07:02.173 07:53:07 -- accel/accel.sh@20 -- # read -r var val 00:07:02.173 07:53:07 -- accel/accel.sh@21 -- # val= 00:07:02.173 07:53:07 -- accel/accel.sh@22 -- # case "$var" in 00:07:02.173 07:53:07 -- accel/accel.sh@20 -- # IFS=: 00:07:02.173 07:53:07 -- accel/accel.sh@20 -- # read -r var val 00:07:02.173 07:53:07 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:02.173 07:53:07 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:02.173 07:53:07 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.173 00:07:02.173 real 0m2.655s 00:07:02.173 user 0m2.298s 00:07:02.173 sys 0m0.149s 00:07:02.173 07:53:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.173 ************************************ 00:07:02.173 07:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:02.173 END TEST accel_copy_crc32c_C2 00:07:02.173 ************************************ 00:07:02.173 07:53:07 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:02.173 07:53:07 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:02.173 07:53:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:02.173 07:53:07 -- common/autotest_common.sh@10 -- # set +x 00:07:02.173 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:02.173 ************************************ 00:07:02.173 START TEST accel_dualcast 00:07:02.173 ************************************ 00:07:02.173 07:53:07 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dualcast -y 00:07:02.173 07:53:07 -- accel/accel.sh@16 -- # local accel_opc 00:07:02.173 07:53:07 -- accel/accel.sh@17 -- # local accel_module 00:07:02.173 07:53:07 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:02.173 07:53:07 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:02.173 07:53:07 -- accel/accel.sh@12 -- # build_accel_config 00:07:02.173 07:53:07 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:02.173 07:53:07 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.173 07:53:07 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.173 07:53:07 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:02.173 07:53:07 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:02.173 07:53:07 -- accel/accel.sh@41 -- # local IFS=, 00:07:02.173 07:53:07 -- accel/accel.sh@42 -- # jq -r . 00:07:02.173 [2024-07-13 07:53:07.703045] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:02.173 [2024-07-13 07:53:07.703727] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67864 ] 00:07:02.173 [2024-07-13 07:53:07.837933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:02.173 [2024-07-13 07:53:07.869198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.552 07:53:08 -- accel/accel.sh@18 -- # out=' 00:07:03.552 SPDK Configuration: 00:07:03.552 Core mask: 0x1 00:07:03.552 00:07:03.552 Accel Perf Configuration: 00:07:03.552 Workload Type: dualcast 00:07:03.552 Transfer size: 4096 bytes 00:07:03.552 Vector count 1 00:07:03.552 Module: software 00:07:03.552 Queue depth: 32 00:07:03.552 Allocate depth: 32 00:07:03.552 # threads/core: 1 00:07:03.552 Run time: 1 seconds 00:07:03.552 Verify: Yes 00:07:03.552 00:07:03.552 Running for 1 seconds... 00:07:03.552 00:07:03.552 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:03.552 ------------------------------------------------------------------------------------ 00:07:03.552 0,0 389408/s 1521 MiB/s 0 0 00:07:03.552 ==================================================================================== 00:07:03.552 Total 389408/s 1521 MiB/s 0 0' 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.552 07:53:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.552 07:53:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:03.552 07:53:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:03.552 07:53:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:03.552 07:53:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:03.552 07:53:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:03.552 07:53:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:03.552 07:53:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:03.552 07:53:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:03.552 07:53:09 -- accel/accel.sh@42 -- # jq -r . 00:07:03.552 [2024-07-13 07:53:09.025026] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
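Dualcast, per the configuration above, writes one 4096-byte source to two destination buffers in a single operation. A minimal illustrative sketch (not SPDK's implementation):

    # Illustrative dualcast: one source, two identical destinations.
    src = bytes(range(256)) * 16            # 4096-byte transfer size, as configured above
    dst1, dst2 = bytes(src), bytes(src)     # both destinations receive the same data
    assert dst1 == src and dst2 == src      # verify pass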
00:07:03.552 [2024-07-13 07:53:09.025639] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67872 ] 00:07:03.552 [2024-07-13 07:53:09.160982] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.552 [2024-07-13 07:53:09.194223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.552 07:53:09 -- accel/accel.sh@21 -- # val= 00:07:03.552 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.552 07:53:09 -- accel/accel.sh@21 -- # val= 00:07:03.552 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.552 07:53:09 -- accel/accel.sh@21 -- # val=0x1 00:07:03.552 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.552 07:53:09 -- accel/accel.sh@21 -- # val= 00:07:03.552 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.552 07:53:09 -- accel/accel.sh@21 -- # val= 00:07:03.552 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.552 07:53:09 -- accel/accel.sh@21 -- # val=dualcast 00:07:03.552 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.552 07:53:09 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.552 07:53:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:03.552 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.552 07:53:09 -- accel/accel.sh@21 -- # val= 00:07:03.552 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.552 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.552 07:53:09 -- accel/accel.sh@21 -- # val=software 00:07:03.552 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.553 07:53:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.553 07:53:09 -- accel/accel.sh@21 -- # val=32 00:07:03.553 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.553 07:53:09 -- accel/accel.sh@21 -- # val=32 00:07:03.553 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.553 07:53:09 -- accel/accel.sh@21 -- # val=1 00:07:03.553 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.553 07:53:09 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:03.553 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.553 07:53:09 -- accel/accel.sh@21 -- # val=Yes 00:07:03.553 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.553 07:53:09 -- accel/accel.sh@21 -- # val= 00:07:03.553 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:03.553 07:53:09 -- accel/accel.sh@21 -- # val= 00:07:03.553 07:53:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # IFS=: 00:07:03.553 07:53:09 -- accel/accel.sh@20 -- # read -r var val 00:07:04.929 07:53:10 -- accel/accel.sh@21 -- # val= 00:07:04.929 07:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.929 07:53:10 -- accel/accel.sh@20 -- # IFS=: 00:07:04.929 07:53:10 -- accel/accel.sh@20 -- # read -r var val 00:07:04.929 07:53:10 -- accel/accel.sh@21 -- # val= 00:07:04.929 07:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.929 07:53:10 -- accel/accel.sh@20 -- # IFS=: 00:07:04.929 07:53:10 -- accel/accel.sh@20 -- # read -r var val 00:07:04.929 07:53:10 -- accel/accel.sh@21 -- # val= 00:07:04.929 07:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.929 07:53:10 -- accel/accel.sh@20 -- # IFS=: 00:07:04.929 07:53:10 -- accel/accel.sh@20 -- # read -r var val 00:07:04.929 07:53:10 -- accel/accel.sh@21 -- # val= 00:07:04.929 07:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.929 07:53:10 -- accel/accel.sh@20 -- # IFS=: 00:07:04.929 07:53:10 -- accel/accel.sh@20 -- # read -r var val 00:07:04.929 07:53:10 -- accel/accel.sh@21 -- # val= 00:07:04.929 07:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.929 07:53:10 -- accel/accel.sh@20 -- # IFS=: 00:07:04.929 07:53:10 -- accel/accel.sh@20 -- # read -r var val 00:07:04.929 ************************************ 00:07:04.929 END TEST accel_dualcast 00:07:04.929 ************************************ 00:07:04.929 07:53:10 -- accel/accel.sh@21 -- # val= 00:07:04.929 07:53:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:04.930 07:53:10 -- accel/accel.sh@20 -- # IFS=: 00:07:04.930 07:53:10 -- accel/accel.sh@20 -- # read -r var val 00:07:04.930 07:53:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:04.930 07:53:10 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:04.930 07:53:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.930 00:07:04.930 real 0m2.646s 00:07:04.930 user 0m2.297s 00:07:04.930 sys 0m0.143s 00:07:04.930 07:53:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.930 07:53:10 -- common/autotest_common.sh@10 -- # set +x 00:07:04.930 07:53:10 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:04.930 07:53:10 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:04.930 07:53:10 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:04.930 07:53:10 -- common/autotest_common.sh@10 -- # set +x 00:07:04.930 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:04.930 ************************************ 00:07:04.930 START TEST accel_compare 00:07:04.930 ************************************ 00:07:04.930 07:53:10 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compare -y 
00:07:04.930 07:53:10 -- accel/accel.sh@16 -- # local accel_opc 00:07:04.930 07:53:10 -- accel/accel.sh@17 -- # local accel_module 00:07:04.930 07:53:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:04.930 07:53:10 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:04.930 07:53:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:04.930 07:53:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:04.930 07:53:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.930 07:53:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.930 07:53:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:04.930 07:53:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:04.930 07:53:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:04.930 07:53:10 -- accel/accel.sh@42 -- # jq -r . 00:07:04.930 [2024-07-13 07:53:10.400916] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:04.930 [2024-07-13 07:53:10.400999] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67900 ] 00:07:04.930 [2024-07-13 07:53:10.540136] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:04.930 [2024-07-13 07:53:10.578197] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.309 07:53:11 -- accel/accel.sh@18 -- # out=' 00:07:06.309 SPDK Configuration: 00:07:06.309 Core mask: 0x1 00:07:06.309 00:07:06.309 Accel Perf Configuration: 00:07:06.309 Workload Type: compare 00:07:06.309 Transfer size: 4096 bytes 00:07:06.309 Vector count 1 00:07:06.309 Module: software 00:07:06.309 Queue depth: 32 00:07:06.309 Allocate depth: 32 00:07:06.309 # threads/core: 1 00:07:06.309 Run time: 1 seconds 00:07:06.309 Verify: Yes 00:07:06.309 00:07:06.309 Running for 1 seconds... 00:07:06.309 00:07:06.309 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:06.309 ------------------------------------------------------------------------------------ 00:07:06.309 0,0 476672/s 1862 MiB/s 0 0 00:07:06.309 ==================================================================================== 00:07:06.309 Total 476672/s 1862 MiB/s 0 0' 00:07:06.309 07:53:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:06.309 07:53:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:06.309 07:53:11 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:06.309 07:53:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.309 07:53:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.309 07:53:11 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:06.309 07:53:11 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:06.309 07:53:11 -- accel/accel.sh@41 -- # local IFS=, 00:07:06.309 07:53:11 -- accel/accel.sh@42 -- # jq -r . 00:07:06.309 [2024-07-13 07:53:11.725889] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
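The compare workload checks two equal-length buffers and reports mismatches, which is what the "Failed" and "Miscompares" columns in the result tables count. A small illustrative sketch:

    # Illustrative compare: count byte positions where the two buffers differ.
    a = bytes(range(256)) * 16              # 4096-byte buffers, matching the transfer size above
    b = bytes(a)
    miscompares = sum(x != y for x, y in zip(a, b))
    assert miscompares == 0                 # identical buffers -> 0 miscompares, as in the tables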
00:07:06.309 [2024-07-13 07:53:11.725991] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67908 ] 00:07:06.309 [2024-07-13 07:53:11.861586] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.309 [2024-07-13 07:53:11.893621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val= 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val= 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val=0x1 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val= 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val= 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val=compare 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val= 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val=software 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@23 -- # accel_module=software 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val=32 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val=32 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val=1 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val='1 seconds' 
00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val=Yes 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val= 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:06.309 07:53:11 -- accel/accel.sh@21 -- # val= 00:07:06.309 07:53:11 -- accel/accel.sh@22 -- # case "$var" in 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # IFS=: 00:07:06.309 07:53:11 -- accel/accel.sh@20 -- # read -r var val 00:07:07.245 07:53:13 -- accel/accel.sh@21 -- # val= 00:07:07.245 07:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.245 07:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:07.245 07:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:07.245 07:53:13 -- accel/accel.sh@21 -- # val= 00:07:07.245 07:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.246 07:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:07.246 07:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:07.246 07:53:13 -- accel/accel.sh@21 -- # val= 00:07:07.246 07:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.246 07:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:07.246 07:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:07.246 07:53:13 -- accel/accel.sh@21 -- # val= 00:07:07.246 07:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.246 07:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:07.246 07:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:07.246 07:53:13 -- accel/accel.sh@21 -- # val= 00:07:07.246 ************************************ 00:07:07.246 END TEST accel_compare 00:07:07.246 ************************************ 00:07:07.246 07:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.246 07:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:07.246 07:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:07.246 07:53:13 -- accel/accel.sh@21 -- # val= 00:07:07.246 07:53:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:07.246 07:53:13 -- accel/accel.sh@20 -- # IFS=: 00:07:07.246 07:53:13 -- accel/accel.sh@20 -- # read -r var val 00:07:07.246 07:53:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:07.246 07:53:13 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:07.246 07:53:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:07.246 00:07:07.246 real 0m2.637s 00:07:07.246 user 0m2.294s 00:07:07.246 sys 0m0.143s 00:07:07.246 07:53:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:07.246 07:53:13 -- common/autotest_common.sh@10 -- # set +x 00:07:07.246 07:53:13 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:07.246 07:53:13 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:07:07.246 07:53:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:07.246 07:53:13 -- common/autotest_common.sh@10 -- # set +x 00:07:07.504 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:07.504 ************************************ 00:07:07.504 START TEST accel_xor 00:07:07.504 ************************************ 00:07:07.504 07:53:13 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y 00:07:07.504 07:53:13 -- 
accel/accel.sh@16 -- # local accel_opc 00:07:07.504 07:53:13 -- accel/accel.sh@17 -- # local accel_module 00:07:07.504 07:53:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:07.504 07:53:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:07.504 07:53:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:07.504 07:53:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:07.504 07:53:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:07.504 07:53:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:07.504 07:53:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:07.504 07:53:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:07.504 07:53:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:07.504 07:53:13 -- accel/accel.sh@42 -- # jq -r . 00:07:07.504 [2024-07-13 07:53:13.083846] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:07.504 [2024-07-13 07:53:13.084078] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67931 ] 00:07:07.504 [2024-07-13 07:53:13.215241] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.504 [2024-07-13 07:53:13.247223] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.881 07:53:14 -- accel/accel.sh@18 -- # out=' 00:07:08.881 SPDK Configuration: 00:07:08.881 Core mask: 0x1 00:07:08.881 00:07:08.881 Accel Perf Configuration: 00:07:08.881 Workload Type: xor 00:07:08.881 Source buffers: 2 00:07:08.881 Transfer size: 4096 bytes 00:07:08.881 Vector count 1 00:07:08.881 Module: software 00:07:08.881 Queue depth: 32 00:07:08.881 Allocate depth: 32 00:07:08.881 # threads/core: 1 00:07:08.881 Run time: 1 seconds 00:07:08.881 Verify: Yes 00:07:08.881 00:07:08.881 Running for 1 seconds... 00:07:08.881 00:07:08.881 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:08.881 ------------------------------------------------------------------------------------ 00:07:08.881 0,0 268640/s 1049 MiB/s 0 0 00:07:08.881 ==================================================================================== 00:07:08.881 Total 268640/s 1049 MiB/s 0 0' 00:07:08.881 07:53:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:08.881 07:53:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:08.881 07:53:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:08.881 07:53:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.881 07:53:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.881 07:53:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:08.881 07:53:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:08.881 07:53:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:08.881 07:53:14 -- accel/accel.sh@42 -- # jq -r . 00:07:08.881 [2024-07-13 07:53:14.384453] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:08.881 [2024-07-13 07:53:14.384524] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67941 ] 00:07:08.881 [2024-07-13 07:53:14.517951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.881 [2024-07-13 07:53:14.548962] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val= 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val= 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val=0x1 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val= 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val= 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val=xor 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val=2 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val= 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val=software 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val=32 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val=32 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val=1 00:07:08.881 07:53:14 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val=Yes 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val= 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:08.881 07:53:14 -- accel/accel.sh@21 -- # val= 00:07:08.881 07:53:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # IFS=: 00:07:08.881 07:53:14 -- accel/accel.sh@20 -- # read -r var val 00:07:10.258 07:53:15 -- accel/accel.sh@21 -- # val= 00:07:10.258 07:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.258 07:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:10.258 07:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:10.258 07:53:15 -- accel/accel.sh@21 -- # val= 00:07:10.258 07:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.258 07:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:10.258 07:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:10.258 07:53:15 -- accel/accel.sh@21 -- # val= 00:07:10.258 07:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.258 07:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:10.258 07:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:10.258 07:53:15 -- accel/accel.sh@21 -- # val= 00:07:10.258 07:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.258 07:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:10.258 07:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:10.258 07:53:15 -- accel/accel.sh@21 -- # val= 00:07:10.258 07:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.258 07:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:10.258 07:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:10.258 07:53:15 -- accel/accel.sh@21 -- # val= 00:07:10.258 07:53:15 -- accel/accel.sh@22 -- # case "$var" in 00:07:10.258 07:53:15 -- accel/accel.sh@20 -- # IFS=: 00:07:10.258 07:53:15 -- accel/accel.sh@20 -- # read -r var val 00:07:10.258 07:53:15 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:10.258 07:53:15 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:10.258 07:53:15 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:10.258 00:07:10.258 real 0m2.611s 00:07:10.258 user 0m2.288s 00:07:10.258 sys 0m0.124s 00:07:10.258 07:53:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.258 ************************************ 00:07:10.258 END TEST accel_xor 00:07:10.258 ************************************ 00:07:10.258 07:53:15 -- common/autotest_common.sh@10 -- # set +x 00:07:10.258 07:53:15 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:10.258 07:53:15 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:10.258 07:53:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:10.258 07:53:15 -- common/autotest_common.sh@10 -- # set +x 00:07:10.258 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:10.258 
************************************ 00:07:10.258 START TEST accel_xor 00:07:10.258 ************************************ 00:07:10.258 07:53:15 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w xor -y -x 3 00:07:10.258 07:53:15 -- accel/accel.sh@16 -- # local accel_opc 00:07:10.258 07:53:15 -- accel/accel.sh@17 -- # local accel_module 00:07:10.258 07:53:15 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:10.258 07:53:15 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:10.258 07:53:15 -- accel/accel.sh@12 -- # build_accel_config 00:07:10.258 07:53:15 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:10.258 07:53:15 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:10.258 07:53:15 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:10.258 07:53:15 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:10.258 07:53:15 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:10.258 07:53:15 -- accel/accel.sh@41 -- # local IFS=, 00:07:10.258 07:53:15 -- accel/accel.sh@42 -- # jq -r . 00:07:10.258 [2024-07-13 07:53:15.750403] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:10.258 [2024-07-13 07:53:15.750486] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67969 ] 00:07:10.258 [2024-07-13 07:53:15.886916] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.258 [2024-07-13 07:53:15.919473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.634 07:53:17 -- accel/accel.sh@18 -- # out=' 00:07:11.634 SPDK Configuration: 00:07:11.634 Core mask: 0x1 00:07:11.634 00:07:11.634 Accel Perf Configuration: 00:07:11.634 Workload Type: xor 00:07:11.634 Source buffers: 3 00:07:11.634 Transfer size: 4096 bytes 00:07:11.634 Vector count 1 00:07:11.634 Module: software 00:07:11.634 Queue depth: 32 00:07:11.634 Allocate depth: 32 00:07:11.634 # threads/core: 1 00:07:11.634 Run time: 1 seconds 00:07:11.634 Verify: Yes 00:07:11.634 00:07:11.634 Running for 1 seconds... 00:07:11.634 00:07:11.634 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:11.634 ------------------------------------------------------------------------------------ 00:07:11.634 0,0 241344/s 942 MiB/s 0 0 00:07:11.634 ==================================================================================== 00:07:11.634 Total 241344/s 942 MiB/s 0 0' 00:07:11.634 07:53:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:11.634 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.634 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.634 07:53:17 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:11.634 07:53:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:11.634 07:53:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:11.634 07:53:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.634 07:53:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.634 07:53:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:11.634 07:53:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:11.634 07:53:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:11.634 07:53:17 -- accel/accel.sh@42 -- # jq -r . 00:07:11.634 [2024-07-13 07:53:17.064891] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
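The two xor tests above differ only in the number of source buffers: two in the first run and three in the second ("-x 3", "Source buffers: 3"). The workload XORs the sources byte-by-byte into the destination; a short illustrative sketch covering both cases (the buffer contents are arbitrary test values, not anything taken from the log):

    # Illustrative xor: byte-wise XOR of N source buffers into one destination.
    from functools import reduce

    def xor_buffers(sources):
        return bytes(reduce(lambda x, y: x ^ y, column) for column in zip(*sources))

    srcs2 = [bytes([0xAA]) * 4096, bytes([0x55]) * 4096]   # two source buffers
    srcs3 = srcs2 + [bytes([0xFF]) * 4096]                  # three source buffers (-x 3)
    assert xor_buffers(srcs2) == bytes([0xFF]) * 4096       # 0xAA ^ 0x55 == 0xFF
    assert xor_buffers(srcs3) == bytes([0x00]) * 4096       # ... ^ 0xFF == 0x00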
00:07:11.634 [2024-07-13 07:53:17.064994] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67977 ] 00:07:11.634 [2024-07-13 07:53:17.196338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.634 [2024-07-13 07:53:17.228459] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.634 07:53:17 -- accel/accel.sh@21 -- # val= 00:07:11.634 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.634 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.634 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.634 07:53:17 -- accel/accel.sh@21 -- # val= 00:07:11.634 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.634 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.634 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.634 07:53:17 -- accel/accel.sh@21 -- # val=0x1 00:07:11.634 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.634 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.634 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.634 07:53:17 -- accel/accel.sh@21 -- # val= 00:07:11.634 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.634 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.634 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.634 07:53:17 -- accel/accel.sh@21 -- # val= 00:07:11.634 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.634 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.634 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.634 07:53:17 -- accel/accel.sh@21 -- # val=xor 00:07:11.634 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.634 07:53:17 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:11.634 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.634 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.635 07:53:17 -- accel/accel.sh@21 -- # val=3 00:07:11.635 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.635 07:53:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:11.635 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.635 07:53:17 -- accel/accel.sh@21 -- # val= 00:07:11.635 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.635 07:53:17 -- accel/accel.sh@21 -- # val=software 00:07:11.635 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.635 07:53:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.635 07:53:17 -- accel/accel.sh@21 -- # val=32 00:07:11.635 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.635 07:53:17 -- accel/accel.sh@21 -- # val=32 00:07:11.635 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.635 07:53:17 -- accel/accel.sh@21 -- # val=1 00:07:11.635 07:53:17 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.635 07:53:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:11.635 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.635 07:53:17 -- accel/accel.sh@21 -- # val=Yes 00:07:11.635 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.635 07:53:17 -- accel/accel.sh@21 -- # val= 00:07:11.635 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:11.635 07:53:17 -- accel/accel.sh@21 -- # val= 00:07:11.635 07:53:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # IFS=: 00:07:11.635 07:53:17 -- accel/accel.sh@20 -- # read -r var val 00:07:12.569 07:53:18 -- accel/accel.sh@21 -- # val= 00:07:12.569 07:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.569 07:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:12.569 07:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:12.569 07:53:18 -- accel/accel.sh@21 -- # val= 00:07:12.569 07:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.569 07:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:12.569 07:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:12.569 07:53:18 -- accel/accel.sh@21 -- # val= 00:07:12.569 07:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.569 07:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:12.569 07:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:12.569 07:53:18 -- accel/accel.sh@21 -- # val= 00:07:12.569 07:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.569 07:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:12.569 07:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:12.569 ************************************ 00:07:12.569 END TEST accel_xor 00:07:12.569 ************************************ 00:07:12.569 07:53:18 -- accel/accel.sh@21 -- # val= 00:07:12.569 07:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.569 07:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:12.569 07:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:12.569 07:53:18 -- accel/accel.sh@21 -- # val= 00:07:12.569 07:53:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:12.569 07:53:18 -- accel/accel.sh@20 -- # IFS=: 00:07:12.569 07:53:18 -- accel/accel.sh@20 -- # read -r var val 00:07:12.569 07:53:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:12.569 07:53:18 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:12.569 07:53:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.569 00:07:12.569 real 0m2.613s 00:07:12.570 user 0m2.278s 00:07:12.570 sys 0m0.137s 00:07:12.570 07:53:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.570 07:53:18 -- common/autotest_common.sh@10 -- # set +x 00:07:12.570 07:53:18 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:12.570 07:53:18 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:12.570 07:53:18 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:12.829 07:53:18 -- common/autotest_common.sh@10 -- # set +x 00:07:12.829 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:12.829 
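The accel_dif_verify case that follows exercises DIF verification in software. Judging from the configuration dump it prints (4096-byte vector and transfer sizes, 512-byte block size, 8 bytes of metadata), a minimal standalone sketch, assuming the same in-tree accel_perf binary, would be:

  # DIF verify for 1 second; the block and metadata sizes shown in the dump are
  # reported by accel_perf for this workload, not flags supplied by the test
  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify

Note the run reports "Verify: No" because -y is not passed here, presumably since the dif_verify workload is itself the verification step.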
************************************ 00:07:12.829 START TEST accel_dif_verify 00:07:12.829 ************************************ 00:07:12.829 07:53:18 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_verify 00:07:12.829 07:53:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:12.829 07:53:18 -- accel/accel.sh@17 -- # local accel_module 00:07:12.829 07:53:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:12.829 07:53:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:12.829 07:53:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:12.829 07:53:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:12.829 07:53:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.829 07:53:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.829 07:53:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:12.829 07:53:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:12.829 07:53:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:12.829 07:53:18 -- accel/accel.sh@42 -- # jq -r . 00:07:12.829 [2024-07-13 07:53:18.420655] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:12.829 [2024-07-13 07:53:18.420748] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68001 ] 00:07:12.829 [2024-07-13 07:53:18.558639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.829 [2024-07-13 07:53:18.588992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.240 07:53:19 -- accel/accel.sh@18 -- # out=' 00:07:14.240 SPDK Configuration: 00:07:14.240 Core mask: 0x1 00:07:14.240 00:07:14.240 Accel Perf Configuration: 00:07:14.240 Workload Type: dif_verify 00:07:14.240 Vector size: 4096 bytes 00:07:14.240 Transfer size: 4096 bytes 00:07:14.240 Block size: 512 bytes 00:07:14.240 Metadata size: 8 bytes 00:07:14.240 Vector count 1 00:07:14.240 Module: software 00:07:14.240 Queue depth: 32 00:07:14.240 Allocate depth: 32 00:07:14.240 # threads/core: 1 00:07:14.240 Run time: 1 seconds 00:07:14.240 Verify: No 00:07:14.240 00:07:14.240 Running for 1 seconds... 00:07:14.240 00:07:14.240 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:14.240 ------------------------------------------------------------------------------------ 00:07:14.240 0,0 110880/s 439 MiB/s 0 0 00:07:14.240 ==================================================================================== 00:07:14.240 Total 110880/s 433 MiB/s 0 0' 00:07:14.240 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.240 07:53:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:14.240 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.240 07:53:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:14.240 07:53:19 -- accel/accel.sh@12 -- # build_accel_config 00:07:14.240 07:53:19 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:14.240 07:53:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.241 07:53:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.241 07:53:19 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:14.241 07:53:19 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:14.241 07:53:19 -- accel/accel.sh@41 -- # local IFS=, 00:07:14.241 07:53:19 -- accel/accel.sh@42 -- # jq -r . 
00:07:14.241 [2024-07-13 07:53:19.740804] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:14.241 [2024-07-13 07:53:19.740897] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68014 ] 00:07:14.241 [2024-07-13 07:53:19.877616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.241 [2024-07-13 07:53:19.907997] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val= 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val= 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val=0x1 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val= 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val= 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val=dif_verify 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val= 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val=software 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@23 -- # accel_module=software 00:07:14.241 07:53:19 -- 
accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val=32 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val=32 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val=1 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val=No 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val= 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:14.241 07:53:19 -- accel/accel.sh@21 -- # val= 00:07:14.241 07:53:19 -- accel/accel.sh@22 -- # case "$var" in 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # IFS=: 00:07:14.241 07:53:19 -- accel/accel.sh@20 -- # read -r var val 00:07:15.619 07:53:21 -- accel/accel.sh@21 -- # val= 00:07:15.619 07:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.619 07:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:15.619 07:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:15.619 07:53:21 -- accel/accel.sh@21 -- # val= 00:07:15.619 07:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.619 07:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:15.619 07:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:15.619 07:53:21 -- accel/accel.sh@21 -- # val= 00:07:15.619 07:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.619 07:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:15.619 07:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:15.619 07:53:21 -- accel/accel.sh@21 -- # val= 00:07:15.619 07:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.619 07:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:15.619 07:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:15.619 07:53:21 -- accel/accel.sh@21 -- # val= 00:07:15.619 07:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.619 07:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:15.619 07:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:15.619 07:53:21 -- accel/accel.sh@21 -- # val= 00:07:15.619 07:53:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:15.619 07:53:21 -- accel/accel.sh@20 -- # IFS=: 00:07:15.619 07:53:21 -- accel/accel.sh@20 -- # read -r var val 00:07:15.619 07:53:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:15.619 07:53:21 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:07:15.619 07:53:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.619 00:07:15.619 real 0m2.638s 00:07:15.619 user 0m2.310s 00:07:15.619 sys 0m0.131s 00:07:15.619 07:53:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.619 
07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:15.619 ************************************ 00:07:15.619 END TEST accel_dif_verify 00:07:15.619 ************************************ 00:07:15.619 07:53:21 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:15.619 07:53:21 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:15.619 07:53:21 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:15.619 07:53:21 -- common/autotest_common.sh@10 -- # set +x 00:07:15.619 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:15.619 ************************************ 00:07:15.619 START TEST accel_dif_generate 00:07:15.619 ************************************ 00:07:15.619 07:53:21 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate 00:07:15.619 07:53:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:15.619 07:53:21 -- accel/accel.sh@17 -- # local accel_module 00:07:15.619 07:53:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:07:15.619 07:53:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:15.619 07:53:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:15.619 07:53:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:15.619 07:53:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:15.619 07:53:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:15.619 07:53:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:15.620 07:53:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:15.620 07:53:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:15.620 07:53:21 -- accel/accel.sh@42 -- # jq -r . 00:07:15.620 [2024-07-13 07:53:21.103278] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:15.620 [2024-07-13 07:53:21.103380] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68038 ] 00:07:15.620 [2024-07-13 07:53:21.238265] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.620 [2024-07-13 07:53:21.268602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.998 07:53:22 -- accel/accel.sh@18 -- # out=' 00:07:16.999 SPDK Configuration: 00:07:16.999 Core mask: 0x1 00:07:16.999 00:07:16.999 Accel Perf Configuration: 00:07:16.999 Workload Type: dif_generate 00:07:16.999 Vector size: 4096 bytes 00:07:16.999 Transfer size: 4096 bytes 00:07:16.999 Block size: 512 bytes 00:07:16.999 Metadata size: 8 bytes 00:07:16.999 Vector count 1 00:07:16.999 Module: software 00:07:16.999 Queue depth: 32 00:07:16.999 Allocate depth: 32 00:07:16.999 # threads/core: 1 00:07:16.999 Run time: 1 seconds 00:07:16.999 Verify: No 00:07:16.999 00:07:16.999 Running for 1 seconds... 
00:07:16.999 00:07:16.999 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:16.999 ------------------------------------------------------------------------------------ 00:07:16.999 0,0 131936/s 523 MiB/s 0 0 00:07:16.999 ==================================================================================== 00:07:16.999 Total 131936/s 515 MiB/s 0 0' 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:16.999 07:53:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:16.999 07:53:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:16.999 07:53:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:16.999 07:53:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:16.999 07:53:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:16.999 07:53:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:16.999 07:53:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:16.999 07:53:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:16.999 07:53:22 -- accel/accel.sh@42 -- # jq -r . 00:07:16.999 [2024-07-13 07:53:22.405387] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:16.999 [2024-07-13 07:53:22.405474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68046 ] 00:07:16.999 [2024-07-13 07:53:22.533144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.999 [2024-07-13 07:53:22.563140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val= 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val= 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val=0x1 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val= 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val= 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val=dif_generate 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 
00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val='512 bytes' 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val='8 bytes' 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val= 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val=software 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@23 -- # accel_module=software 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val=32 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val=32 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val=1 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val=No 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val= 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:16.999 07:53:22 -- accel/accel.sh@21 -- # val= 00:07:16.999 07:53:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # IFS=: 00:07:16.999 07:53:22 -- accel/accel.sh@20 -- # read -r var val 00:07:17.934 07:53:23 -- accel/accel.sh@21 -- # val= 00:07:17.934 07:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.934 07:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.934 07:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.934 07:53:23 -- accel/accel.sh@21 -- # val= 00:07:17.934 07:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.934 07:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.934 07:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.934 07:53:23 -- accel/accel.sh@21 -- # val= 00:07:17.934 07:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.934 07:53:23 -- 
accel/accel.sh@20 -- # IFS=: 00:07:17.934 07:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.934 07:53:23 -- accel/accel.sh@21 -- # val= 00:07:17.934 07:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.934 07:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.934 07:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.934 07:53:23 -- accel/accel.sh@21 -- # val= 00:07:17.934 07:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.934 07:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.934 07:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.934 07:53:23 -- accel/accel.sh@21 -- # val= 00:07:17.934 07:53:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:17.934 07:53:23 -- accel/accel.sh@20 -- # IFS=: 00:07:17.934 07:53:23 -- accel/accel.sh@20 -- # read -r var val 00:07:17.934 07:53:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:17.934 07:53:23 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:07:17.934 07:53:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:17.934 00:07:17.934 real 0m2.623s 00:07:17.934 user 0m2.291s 00:07:17.934 sys 0m0.137s 00:07:17.934 07:53:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:17.934 07:53:23 -- common/autotest_common.sh@10 -- # set +x 00:07:17.934 ************************************ 00:07:17.934 END TEST accel_dif_generate 00:07:17.934 ************************************ 00:07:17.934 07:53:23 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:17.934 07:53:23 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:07:17.934 07:53:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:17.934 07:53:23 -- common/autotest_common.sh@10 -- # set +x 00:07:18.192 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:18.192 ************************************ 00:07:18.192 START TEST accel_dif_generate_copy 00:07:18.192 ************************************ 00:07:18.192 07:53:23 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w dif_generate_copy 00:07:18.192 07:53:23 -- accel/accel.sh@16 -- # local accel_opc 00:07:18.192 07:53:23 -- accel/accel.sh@17 -- # local accel_module 00:07:18.192 07:53:23 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:07:18.192 07:53:23 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:18.192 07:53:23 -- accel/accel.sh@12 -- # build_accel_config 00:07:18.192 07:53:23 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:18.192 07:53:23 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:18.192 07:53:23 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:18.192 07:53:23 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:18.192 07:53:23 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:18.193 07:53:23 -- accel/accel.sh@41 -- # local IFS=, 00:07:18.193 07:53:23 -- accel/accel.sh@42 -- # jq -r . 00:07:18.193 [2024-07-13 07:53:23.774928] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:18.193 [2024-07-13 07:53:23.775018] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68069 ] 00:07:18.193 [2024-07-13 07:53:23.903442] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.193 [2024-07-13 07:53:23.937613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.566 07:53:25 -- accel/accel.sh@18 -- # out=' 00:07:19.566 SPDK Configuration: 00:07:19.566 Core mask: 0x1 00:07:19.566 00:07:19.566 Accel Perf Configuration: 00:07:19.566 Workload Type: dif_generate_copy 00:07:19.566 Vector size: 4096 bytes 00:07:19.566 Transfer size: 4096 bytes 00:07:19.566 Vector count 1 00:07:19.566 Module: software 00:07:19.566 Queue depth: 32 00:07:19.566 Allocate depth: 32 00:07:19.566 # threads/core: 1 00:07:19.566 Run time: 1 seconds 00:07:19.566 Verify: No 00:07:19.566 00:07:19.566 Running for 1 seconds... 00:07:19.566 00:07:19.566 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:19.566 ------------------------------------------------------------------------------------ 00:07:19.566 0,0 109824/s 435 MiB/s 0 0 00:07:19.566 ==================================================================================== 00:07:19.566 Total 109824/s 429 MiB/s 0 0' 00:07:19.566 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.566 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.566 07:53:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:19.566 07:53:25 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:19.566 07:53:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.566 07:53:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:19.566 07:53:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:19.566 07:53:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.566 07:53:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:19.566 07:53:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:19.566 07:53:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:19.566 07:53:25 -- accel/accel.sh@42 -- # jq -r . 00:07:19.566 [2024-07-13 07:53:25.107794] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:19.566 [2024-07-13 07:53:25.107873] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68082 ] 00:07:19.566 [2024-07-13 07:53:25.242900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.566 [2024-07-13 07:53:25.275754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.566 07:53:25 -- accel/accel.sh@21 -- # val= 00:07:19.566 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.566 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.566 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.566 07:53:25 -- accel/accel.sh@21 -- # val= 00:07:19.566 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.566 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.566 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.566 07:53:25 -- accel/accel.sh@21 -- # val=0x1 00:07:19.566 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.566 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.566 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.567 07:53:25 -- accel/accel.sh@21 -- # val= 00:07:19.567 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.567 07:53:25 -- accel/accel.sh@21 -- # val= 00:07:19.567 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.567 07:53:25 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:07:19.567 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.567 07:53:25 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.567 07:53:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.567 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.567 07:53:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:19.567 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.567 07:53:25 -- accel/accel.sh@21 -- # val= 00:07:19.567 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.567 07:53:25 -- accel/accel.sh@21 -- # val=software 00:07:19.567 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.567 07:53:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.567 07:53:25 -- accel/accel.sh@21 -- # val=32 00:07:19.567 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.567 07:53:25 -- accel/accel.sh@21 -- # val=32 00:07:19.567 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.567 07:53:25 -- accel/accel.sh@21 
-- # val=1 00:07:19.567 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.567 07:53:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:19.567 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.567 07:53:25 -- accel/accel.sh@21 -- # val=No 00:07:19.567 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.567 07:53:25 -- accel/accel.sh@21 -- # val= 00:07:19.567 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:19.567 07:53:25 -- accel/accel.sh@21 -- # val= 00:07:19.567 07:53:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # IFS=: 00:07:19.567 07:53:25 -- accel/accel.sh@20 -- # read -r var val 00:07:20.944 07:53:26 -- accel/accel.sh@21 -- # val= 00:07:20.944 07:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.944 07:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.944 07:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.944 07:53:26 -- accel/accel.sh@21 -- # val= 00:07:20.944 07:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.944 07:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.944 07:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.944 07:53:26 -- accel/accel.sh@21 -- # val= 00:07:20.944 07:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.944 07:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.944 07:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.944 07:53:26 -- accel/accel.sh@21 -- # val= 00:07:20.944 07:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.944 07:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.944 07:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.944 07:53:26 -- accel/accel.sh@21 -- # val= 00:07:20.944 07:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.944 07:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.944 07:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.944 07:53:26 -- accel/accel.sh@21 -- # val= 00:07:20.944 07:53:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:20.944 07:53:26 -- accel/accel.sh@20 -- # IFS=: 00:07:20.944 07:53:26 -- accel/accel.sh@20 -- # read -r var val 00:07:20.944 07:53:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:20.944 07:53:26 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:07:20.944 07:53:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:20.944 00:07:20.944 real 0m2.667s 00:07:20.944 user 0m2.325s 00:07:20.944 sys 0m0.143s 00:07:20.944 07:53:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:20.944 07:53:26 -- common/autotest_common.sh@10 -- # set +x 00:07:20.944 ************************************ 00:07:20.944 END TEST accel_dif_generate_copy 00:07:20.944 ************************************ 00:07:20.944 07:53:26 -- accel/accel.sh@107 -- # [[ y == y ]] 00:07:20.944 07:53:26 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:20.944 07:53:26 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:20.944 07:53:26 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:20.944 07:53:26 -- 
common/autotest_common.sh@10 -- # set +x 00:07:20.944 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:20.944 ************************************ 00:07:20.944 START TEST accel_comp 00:07:20.944 ************************************ 00:07:20.944 07:53:26 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:20.944 07:53:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:20.944 07:53:26 -- accel/accel.sh@17 -- # local accel_module 00:07:20.944 07:53:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:20.944 07:53:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:20.944 07:53:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:20.944 07:53:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:20.944 07:53:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:20.944 07:53:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:20.944 07:53:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:20.944 07:53:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:20.944 07:53:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:20.944 07:53:26 -- accel/accel.sh@42 -- # jq -r . 00:07:20.944 [2024-07-13 07:53:26.496053] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:20.944 [2024-07-13 07:53:26.496836] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68105 ] 00:07:20.944 [2024-07-13 07:53:26.633922] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.944 [2024-07-13 07:53:26.668091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.324 07:53:27 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:22.324 00:07:22.324 SPDK Configuration: 00:07:22.324 Core mask: 0x1 00:07:22.324 00:07:22.324 Accel Perf Configuration: 00:07:22.324 Workload Type: compress 00:07:22.324 Transfer size: 4096 bytes 00:07:22.324 Vector count 1 00:07:22.324 Module: software 00:07:22.324 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.324 Queue depth: 32 00:07:22.324 Allocate depth: 32 00:07:22.324 # threads/core: 1 00:07:22.324 Run time: 1 seconds 00:07:22.324 Verify: No 00:07:22.324 00:07:22.324 Running for 1 seconds... 
00:07:22.324 00:07:22.324 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:22.324 ------------------------------------------------------------------------------------ 00:07:22.324 0,0 56608/s 235 MiB/s 0 0 00:07:22.324 ==================================================================================== 00:07:22.324 Total 56608/s 221 MiB/s 0 0' 00:07:22.324 07:53:27 -- accel/accel.sh@20 -- # IFS=: 00:07:22.324 07:53:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.324 07:53:27 -- accel/accel.sh@20 -- # read -r var val 00:07:22.324 07:53:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:22.324 07:53:27 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.324 07:53:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:22.324 07:53:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.324 07:53:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.324 07:53:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:22.324 07:53:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:22.324 07:53:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:22.324 07:53:27 -- accel/accel.sh@42 -- # jq -r . 00:07:22.324 [2024-07-13 07:53:27.823261] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:22.324 [2024-07-13 07:53:27.823361] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68113 ] 00:07:22.324 [2024-07-13 07:53:27.959819] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.324 [2024-07-13 07:53:27.992038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.324 07:53:28 -- accel/accel.sh@21 -- # val= 00:07:22.324 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.324 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.324 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.324 07:53:28 -- accel/accel.sh@21 -- # val= 00:07:22.324 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.324 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.324 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.324 07:53:28 -- accel/accel.sh@21 -- # val= 00:07:22.324 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.324 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.324 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.324 07:53:28 -- accel/accel.sh@21 -- # val=0x1 00:07:22.324 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.324 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.324 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.324 07:53:28 -- accel/accel.sh@21 -- # val= 00:07:22.324 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.324 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.324 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.324 07:53:28 -- accel/accel.sh@21 -- # val= 00:07:22.324 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.324 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.324 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.324 07:53:28 -- accel/accel.sh@21 -- # val=compress 00:07:22.324 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.324 07:53:28 -- accel/accel.sh@24 -- # accel_opc=compress 00:07:22.324 07:53:28 -- accel/accel.sh@20 -- # IFS=: 
00:07:22.324 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.325 07:53:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:22.325 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.325 07:53:28 -- accel/accel.sh@21 -- # val= 00:07:22.325 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.325 07:53:28 -- accel/accel.sh@21 -- # val=software 00:07:22.325 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.325 07:53:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.325 07:53:28 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:22.325 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.325 07:53:28 -- accel/accel.sh@21 -- # val=32 00:07:22.325 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.325 07:53:28 -- accel/accel.sh@21 -- # val=32 00:07:22.325 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.325 07:53:28 -- accel/accel.sh@21 -- # val=1 00:07:22.325 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.325 07:53:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:22.325 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.325 07:53:28 -- accel/accel.sh@21 -- # val=No 00:07:22.325 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.325 07:53:28 -- accel/accel.sh@21 -- # val= 00:07:22.325 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:22.325 07:53:28 -- accel/accel.sh@21 -- # val= 00:07:22.325 07:53:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # IFS=: 00:07:22.325 07:53:28 -- accel/accel.sh@20 -- # read -r var val 00:07:23.703 07:53:29 -- accel/accel.sh@21 -- # val= 00:07:23.703 07:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.703 07:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.703 07:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.703 07:53:29 -- accel/accel.sh@21 -- # val= 00:07:23.703 07:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.703 07:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.703 07:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.703 07:53:29 -- accel/accel.sh@21 -- # val= 00:07:23.703 07:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.703 07:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.703 07:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.703 07:53:29 -- accel/accel.sh@21 -- # val= 
00:07:23.703 07:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.703 07:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.703 07:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.703 07:53:29 -- accel/accel.sh@21 -- # val= 00:07:23.703 07:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.703 07:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.703 07:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.703 07:53:29 -- accel/accel.sh@21 -- # val= 00:07:23.703 07:53:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:23.703 07:53:29 -- accel/accel.sh@20 -- # IFS=: 00:07:23.703 07:53:29 -- accel/accel.sh@20 -- # read -r var val 00:07:23.703 07:53:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:23.703 07:53:29 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:07:23.703 07:53:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:23.703 00:07:23.703 real 0m2.666s 00:07:23.703 user 0m2.331s 00:07:23.703 sys 0m0.136s 00:07:23.703 ************************************ 00:07:23.703 END TEST accel_comp 00:07:23.703 ************************************ 00:07:23.703 07:53:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:23.703 07:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:23.703 07:53:29 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:23.703 07:53:29 -- common/autotest_common.sh@1077 -- # '[' 9 -le 1 ']' 00:07:23.703 07:53:29 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:23.703 07:53:29 -- common/autotest_common.sh@10 -- # set +x 00:07:23.703 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:23.703 ************************************ 00:07:23.703 START TEST accel_decomp 00:07:23.703 ************************************ 00:07:23.703 07:53:29 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:23.703 07:53:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:23.703 07:53:29 -- accel/accel.sh@17 -- # local accel_module 00:07:23.703 07:53:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:23.703 07:53:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:23.703 07:53:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:23.703 07:53:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:23.703 07:53:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:23.703 07:53:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:23.703 07:53:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:23.703 07:53:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:23.703 07:53:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:23.703 07:53:29 -- accel/accel.sh@42 -- # jq -r . 00:07:23.703 [2024-07-13 07:53:29.214298] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:23.703 [2024-07-13 07:53:29.214393] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68140 ] 00:07:23.703 [2024-07-13 07:53:29.351164] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.703 [2024-07-13 07:53:29.383358] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.081 07:53:30 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:25.081 00:07:25.081 SPDK Configuration: 00:07:25.081 Core mask: 0x1 00:07:25.081 00:07:25.081 Accel Perf Configuration: 00:07:25.081 Workload Type: decompress 00:07:25.081 Transfer size: 4096 bytes 00:07:25.081 Vector count 1 00:07:25.081 Module: software 00:07:25.081 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.081 Queue depth: 32 00:07:25.081 Allocate depth: 32 00:07:25.081 # threads/core: 1 00:07:25.081 Run time: 1 seconds 00:07:25.081 Verify: Yes 00:07:25.081 00:07:25.081 Running for 1 seconds... 00:07:25.081 00:07:25.081 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:25.081 ------------------------------------------------------------------------------------ 00:07:25.081 0,0 77536/s 142 MiB/s 0 0 00:07:25.081 ==================================================================================== 00:07:25.081 Total 77536/s 302 MiB/s 0 0' 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.081 07:53:30 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:07:25.081 07:53:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:25.081 07:53:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:25.081 07:53:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.081 07:53:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.081 07:53:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:25.081 07:53:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:25.081 07:53:30 -- accel/accel.sh@41 -- # local IFS=, 00:07:25.081 07:53:30 -- accel/accel.sh@42 -- # jq -r . 00:07:25.081 [2024-07-13 07:53:30.523123] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:25.081 [2024-07-13 07:53:30.523250] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68149 ] 00:07:25.081 [2024-07-13 07:53:30.650942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:25.081 [2024-07-13 07:53:30.683154] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val= 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val= 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val= 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val=0x1 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val= 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val= 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val=decompress 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val= 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val=software 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@23 -- # accel_module=software 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val=32 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- 
accel/accel.sh@21 -- # val=32 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val=1 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val=Yes 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val= 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:25.081 07:53:30 -- accel/accel.sh@21 -- # val= 00:07:25.081 07:53:30 -- accel/accel.sh@22 -- # case "$var" in 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # IFS=: 00:07:25.081 07:53:30 -- accel/accel.sh@20 -- # read -r var val 00:07:26.019 07:53:31 -- accel/accel.sh@21 -- # val= 00:07:26.019 07:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.019 07:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.019 07:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.019 07:53:31 -- accel/accel.sh@21 -- # val= 00:07:26.019 07:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.019 07:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.019 07:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.019 07:53:31 -- accel/accel.sh@21 -- # val= 00:07:26.019 07:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.019 07:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.019 07:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.019 07:53:31 -- accel/accel.sh@21 -- # val= 00:07:26.019 07:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.019 07:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.019 07:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.019 07:53:31 -- accel/accel.sh@21 -- # val= 00:07:26.019 07:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.019 07:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.019 07:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.019 07:53:31 -- accel/accel.sh@21 -- # val= 00:07:26.019 07:53:31 -- accel/accel.sh@22 -- # case "$var" in 00:07:26.019 07:53:31 -- accel/accel.sh@20 -- # IFS=: 00:07:26.019 07:53:31 -- accel/accel.sh@20 -- # read -r var val 00:07:26.019 07:53:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:26.019 07:53:31 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:26.019 07:53:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:26.019 00:07:26.019 real 0m2.617s 00:07:26.019 user 0m2.289s 00:07:26.019 sys 0m0.128s 00:07:26.019 07:53:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.019 07:53:31 -- common/autotest_common.sh@10 -- # set +x 00:07:26.019 ************************************ 00:07:26.019 END TEST accel_decomp 00:07:26.019 ************************************ 00:07:26.278 07:53:31 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
00:07:26.278 07:53:31 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:26.278 07:53:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:26.278 07:53:31 -- common/autotest_common.sh@10 -- # set +x 00:07:26.278 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:26.278 ************************************ 00:07:26.278 START TEST accel_decmop_full 00:07:26.278 ************************************ 00:07:26.278 07:53:31 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:26.278 07:53:31 -- accel/accel.sh@16 -- # local accel_opc 00:07:26.278 07:53:31 -- accel/accel.sh@17 -- # local accel_module 00:07:26.278 07:53:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:26.278 07:53:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:26.278 07:53:31 -- accel/accel.sh@12 -- # build_accel_config 00:07:26.278 07:53:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:26.278 07:53:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:26.278 07:53:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:26.278 07:53:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:26.278 07:53:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:26.278 07:53:31 -- accel/accel.sh@41 -- # local IFS=, 00:07:26.278 07:53:31 -- accel/accel.sh@42 -- # jq -r . 00:07:26.278 [2024-07-13 07:53:31.881432] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:26.278 [2024-07-13 07:53:31.881495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68172 ] 00:07:26.278 [2024-07-13 07:53:32.005030] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.278 [2024-07-13 07:53:32.036423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.664 07:53:33 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:27.664 00:07:27.664 SPDK Configuration: 00:07:27.664 Core mask: 0x1 00:07:27.664 00:07:27.664 Accel Perf Configuration: 00:07:27.664 Workload Type: decompress 00:07:27.664 Transfer size: 111250 bytes 00:07:27.664 Vector count 1 00:07:27.664 Module: software 00:07:27.664 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.664 Queue depth: 32 00:07:27.664 Allocate depth: 32 00:07:27.664 # threads/core: 1 00:07:27.664 Run time: 1 seconds 00:07:27.664 Verify: Yes 00:07:27.664 00:07:27.664 Running for 1 seconds... 
00:07:27.664 00:07:27.664 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:27.664 ------------------------------------------------------------------------------------ 00:07:27.664 0,0 5312/s 219 MiB/s 0 0 00:07:27.664 ==================================================================================== 00:07:27.664 Total 5312/s 563 MiB/s 0 0' 00:07:27.664 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.664 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.664 07:53:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:27.664 07:53:33 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:07:27.664 07:53:33 -- accel/accel.sh@12 -- # build_accel_config 00:07:27.664 07:53:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:27.664 07:53:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:27.664 07:53:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:27.664 07:53:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:27.664 07:53:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:27.664 07:53:33 -- accel/accel.sh@41 -- # local IFS=, 00:07:27.664 07:53:33 -- accel/accel.sh@42 -- # jq -r . 00:07:27.664 [2024-07-13 07:53:33.182135] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:27.664 [2024-07-13 07:53:33.182226] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68180 ] 00:07:27.664 [2024-07-13 07:53:33.317088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.664 [2024-07-13 07:53:33.348410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.664 07:53:33 -- accel/accel.sh@21 -- # val= 00:07:27.664 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.664 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.664 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.664 07:53:33 -- accel/accel.sh@21 -- # val= 00:07:27.664 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.664 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val= 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val=0x1 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val= 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val= 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val=decompress 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:27.665 07:53:33 -- accel/accel.sh@20 
-- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val= 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val=software 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@23 -- # accel_module=software 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val=32 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val=32 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val=1 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val=Yes 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val= 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:27.665 07:53:33 -- accel/accel.sh@21 -- # val= 00:07:27.665 07:53:33 -- accel/accel.sh@22 -- # case "$var" in 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # IFS=: 00:07:27.665 07:53:33 -- accel/accel.sh@20 -- # read -r var val 00:07:29.042 07:53:34 -- accel/accel.sh@21 -- # val= 00:07:29.042 07:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.042 07:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.042 07:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.042 07:53:34 -- accel/accel.sh@21 -- # val= 00:07:29.042 07:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.042 07:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.042 07:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.042 07:53:34 -- accel/accel.sh@21 -- # val= 00:07:29.042 07:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.042 07:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.042 07:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.042 07:53:34 -- accel/accel.sh@21 -- # 
val= 00:07:29.042 07:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.042 07:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.042 07:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.042 07:53:34 -- accel/accel.sh@21 -- # val= 00:07:29.042 07:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.042 07:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.042 07:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.042 07:53:34 -- accel/accel.sh@21 -- # val= 00:07:29.042 07:53:34 -- accel/accel.sh@22 -- # case "$var" in 00:07:29.042 07:53:34 -- accel/accel.sh@20 -- # IFS=: 00:07:29.042 07:53:34 -- accel/accel.sh@20 -- # read -r var val 00:07:29.042 07:53:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:29.042 07:53:34 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:29.042 07:53:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:29.042 00:07:29.042 real 0m2.645s 00:07:29.042 user 0m2.305s 00:07:29.042 sys 0m0.139s 00:07:29.042 07:53:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:29.042 07:53:34 -- common/autotest_common.sh@10 -- # set +x 00:07:29.042 ************************************ 00:07:29.042 END TEST accel_decmop_full 00:07:29.042 ************************************ 00:07:29.042 07:53:34 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:29.042 07:53:34 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:29.042 07:53:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:29.042 07:53:34 -- common/autotest_common.sh@10 -- # set +x 00:07:29.042 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:29.042 ************************************ 00:07:29.042 START TEST accel_decomp_mcore 00:07:29.042 ************************************ 00:07:29.042 07:53:34 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:29.042 07:53:34 -- accel/accel.sh@16 -- # local accel_opc 00:07:29.042 07:53:34 -- accel/accel.sh@17 -- # local accel_module 00:07:29.042 07:53:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:29.042 07:53:34 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:29.042 07:53:34 -- accel/accel.sh@12 -- # build_accel_config 00:07:29.042 07:53:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.042 07:53:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.042 07:53:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.042 07:53:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.042 07:53:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.042 07:53:34 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.042 07:53:34 -- accel/accel.sh@42 -- # jq -r . 00:07:29.042 [2024-07-13 07:53:34.572756] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:29.042 [2024-07-13 07:53:34.572886] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68208 ] 00:07:29.042 [2024-07-13 07:53:34.709549] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:29.042 [2024-07-13 07:53:34.742643] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:29.042 [2024-07-13 07:53:34.742819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:29.042 [2024-07-13 07:53:34.742931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:29.042 [2024-07-13 07:53:34.743194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.442 07:53:35 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:30.442 00:07:30.442 SPDK Configuration: 00:07:30.442 Core mask: 0xf 00:07:30.442 00:07:30.442 Accel Perf Configuration: 00:07:30.442 Workload Type: decompress 00:07:30.442 Transfer size: 4096 bytes 00:07:30.442 Vector count 1 00:07:30.442 Module: software 00:07:30.442 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.442 Queue depth: 32 00:07:30.442 Allocate depth: 32 00:07:30.442 # threads/core: 1 00:07:30.442 Run time: 1 seconds 00:07:30.442 Verify: Yes 00:07:30.442 00:07:30.442 Running for 1 seconds... 00:07:30.442 00:07:30.442 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:30.442 ------------------------------------------------------------------------------------ 00:07:30.442 0,0 63040/s 116 MiB/s 0 0 00:07:30.442 3,0 61408/s 113 MiB/s 0 0 00:07:30.442 2,0 60352/s 111 MiB/s 0 0 00:07:30.442 1,0 60032/s 110 MiB/s 0 0 00:07:30.442 ==================================================================================== 00:07:30.442 Total 244832/s 956 MiB/s 0 0' 00:07:30.442 07:53:35 -- accel/accel.sh@20 -- # IFS=: 00:07:30.442 07:53:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:30.442 07:53:35 -- accel/accel.sh@20 -- # read -r var val 00:07:30.442 07:53:35 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:07:30.442 07:53:35 -- accel/accel.sh@12 -- # build_accel_config 00:07:30.442 07:53:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:30.442 07:53:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:30.442 07:53:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:30.442 07:53:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:30.442 07:53:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:30.442 07:53:35 -- accel/accel.sh@41 -- # local IFS=, 00:07:30.442 07:53:35 -- accel/accel.sh@42 -- # jq -r . 00:07:30.442 [2024-07-13 07:53:35.892708] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:30.442 [2024-07-13 07:53:35.892817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68219 ] 00:07:30.442 [2024-07-13 07:53:36.024829] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:30.442 [2024-07-13 07:53:36.056362] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.442 [2024-07-13 07:53:36.056507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:30.442 [2024-07-13 07:53:36.056636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:30.442 [2024-07-13 07:53:36.056637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.442 07:53:36 -- accel/accel.sh@21 -- # val= 00:07:30.442 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val= 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val= 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val=0xf 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val= 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val= 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val=decompress 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val= 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val=software 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@23 -- # accel_module=software 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 
00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val=32 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val=32 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val=1 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val=Yes 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val= 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:30.443 07:53:36 -- accel/accel.sh@21 -- # val= 00:07:30.443 07:53:36 -- accel/accel.sh@22 -- # case "$var" in 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # IFS=: 00:07:30.443 07:53:36 -- accel/accel.sh@20 -- # read -r var val 00:07:31.378 07:53:37 -- accel/accel.sh@21 -- # val= 00:07:31.378 07:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.378 07:53:37 -- accel/accel.sh@21 -- # val= 00:07:31.378 07:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.378 07:53:37 -- accel/accel.sh@21 -- # val= 00:07:31.378 07:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.378 07:53:37 -- accel/accel.sh@21 -- # val= 00:07:31.378 07:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.378 07:53:37 -- accel/accel.sh@21 -- # val= 00:07:31.378 07:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.378 07:53:37 -- accel/accel.sh@21 -- # val= 00:07:31.378 07:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.378 07:53:37 -- accel/accel.sh@21 -- # val= 00:07:31.378 07:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.378 07:53:37 -- accel/accel.sh@21 -- # val= 00:07:31.378 07:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.378 07:53:37 -- 
accel/accel.sh@20 -- # read -r var val 00:07:31.378 07:53:37 -- accel/accel.sh@21 -- # val= 00:07:31.378 07:53:37 -- accel/accel.sh@22 -- # case "$var" in 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # IFS=: 00:07:31.378 07:53:37 -- accel/accel.sh@20 -- # read -r var val 00:07:31.378 07:53:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:31.378 07:53:37 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:31.379 07:53:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:31.379 00:07:31.379 real 0m2.636s 00:07:31.379 user 0m8.691s 00:07:31.379 sys 0m0.159s 00:07:31.379 07:53:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:31.379 07:53:37 -- common/autotest_common.sh@10 -- # set +x 00:07:31.379 ************************************ 00:07:31.379 END TEST accel_decomp_mcore 00:07:31.379 ************************************ 00:07:31.637 07:53:37 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.637 07:53:37 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:31.637 07:53:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:31.637 07:53:37 -- common/autotest_common.sh@10 -- # set +x 00:07:31.637 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:31.637 ************************************ 00:07:31.637 START TEST accel_decomp_full_mcore 00:07:31.637 ************************************ 00:07:31.637 07:53:37 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.637 07:53:37 -- accel/accel.sh@16 -- # local accel_opc 00:07:31.637 07:53:37 -- accel/accel.sh@17 -- # local accel_module 00:07:31.637 07:53:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.637 07:53:37 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:31.637 07:53:37 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.637 07:53:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.637 07:53:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.637 07:53:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.637 07:53:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.637 07:53:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.637 07:53:37 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.637 07:53:37 -- accel/accel.sh@42 -- # jq -r . 00:07:31.637 [2024-07-13 07:53:37.252821] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:31.637 [2024-07-13 07:53:37.252899] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68245 ] 00:07:31.637 [2024-07-13 07:53:37.382284] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:31.637 [2024-07-13 07:53:37.413547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:31.637 [2024-07-13 07:53:37.413657] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:31.637 [2024-07-13 07:53:37.413842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:31.637 [2024-07-13 07:53:37.413843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.013 07:53:38 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:33.013 00:07:33.013 SPDK Configuration: 00:07:33.013 Core mask: 0xf 00:07:33.013 00:07:33.013 Accel Perf Configuration: 00:07:33.013 Workload Type: decompress 00:07:33.013 Transfer size: 111250 bytes 00:07:33.013 Vector count 1 00:07:33.013 Module: software 00:07:33.013 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.013 Queue depth: 32 00:07:33.013 Allocate depth: 32 00:07:33.013 # threads/core: 1 00:07:33.013 Run time: 1 seconds 00:07:33.013 Verify: Yes 00:07:33.013 00:07:33.013 Running for 1 seconds... 00:07:33.013 00:07:33.013 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.013 ------------------------------------------------------------------------------------ 00:07:33.013 0,0 4736/s 195 MiB/s 0 0 00:07:33.013 3,0 4768/s 196 MiB/s 0 0 00:07:33.013 2,0 4800/s 198 MiB/s 0 0 00:07:33.013 1,0 4768/s 196 MiB/s 0 0 00:07:33.013 ==================================================================================== 00:07:33.013 Total 19072/s 2023 MiB/s 0 0' 00:07:33.013 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.013 07:53:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:33.013 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.013 07:53:38 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:33.013 07:53:38 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.013 07:53:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.013 07:53:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.013 07:53:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.013 07:53:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.013 07:53:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.013 07:53:38 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.013 07:53:38 -- accel/accel.sh@42 -- # jq -r . 00:07:33.013 [2024-07-13 07:53:38.591612] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:33.013 [2024-07-13 07:53:38.591708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68256 ] 00:07:33.013 [2024-07-13 07:53:38.726947] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:33.013 [2024-07-13 07:53:38.758062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.013 [2024-07-13 07:53:38.758207] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:33.013 [2024-07-13 07:53:38.758345] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:33.013 [2024-07-13 07:53:38.758636] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.013 07:53:38 -- accel/accel.sh@21 -- # val= 00:07:33.013 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.013 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.013 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.013 07:53:38 -- accel/accel.sh@21 -- # val= 00:07:33.013 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.013 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.013 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.013 07:53:38 -- accel/accel.sh@21 -- # val= 00:07:33.013 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.013 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.013 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.013 07:53:38 -- accel/accel.sh@21 -- # val=0xf 00:07:33.013 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.013 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.013 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.013 07:53:38 -- accel/accel.sh@21 -- # val= 00:07:33.013 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.013 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.013 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.013 07:53:38 -- accel/accel.sh@21 -- # val= 00:07:33.013 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.013 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.013 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.013 07:53:38 -- accel/accel.sh@21 -- # val=decompress 00:07:33.013 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.014 07:53:38 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.014 07:53:38 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:33.014 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.014 07:53:38 -- accel/accel.sh@21 -- # val= 00:07:33.014 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.014 07:53:38 -- accel/accel.sh@21 -- # val=software 00:07:33.014 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.014 07:53:38 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.014 07:53:38 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:33.014 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # IFS=: 
00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.014 07:53:38 -- accel/accel.sh@21 -- # val=32 00:07:33.014 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.014 07:53:38 -- accel/accel.sh@21 -- # val=32 00:07:33.014 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.014 07:53:38 -- accel/accel.sh@21 -- # val=1 00:07:33.014 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.014 07:53:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.014 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.014 07:53:38 -- accel/accel.sh@21 -- # val=Yes 00:07:33.014 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.014 07:53:38 -- accel/accel.sh@21 -- # val= 00:07:33.014 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:33.014 07:53:38 -- accel/accel.sh@21 -- # val= 00:07:33.014 07:53:38 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # IFS=: 00:07:33.014 07:53:38 -- accel/accel.sh@20 -- # read -r var val 00:07:34.389 07:53:39 -- accel/accel.sh@21 -- # val= 00:07:34.389 07:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.389 07:53:39 -- accel/accel.sh@21 -- # val= 00:07:34.389 07:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.389 07:53:39 -- accel/accel.sh@21 -- # val= 00:07:34.389 07:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.389 07:53:39 -- accel/accel.sh@21 -- # val= 00:07:34.389 07:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.389 07:53:39 -- accel/accel.sh@21 -- # val= 00:07:34.389 07:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.389 07:53:39 -- accel/accel.sh@21 -- # val= 00:07:34.389 07:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.389 07:53:39 -- accel/accel.sh@21 -- # val= 00:07:34.389 07:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.389 07:53:39 -- accel/accel.sh@21 -- # val= 00:07:34.389 07:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.389 07:53:39 -- 
accel/accel.sh@20 -- # read -r var val 00:07:34.389 07:53:39 -- accel/accel.sh@21 -- # val= 00:07:34.389 07:53:39 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # IFS=: 00:07:34.389 07:53:39 -- accel/accel.sh@20 -- # read -r var val 00:07:34.389 07:53:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.389 07:53:39 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:34.389 07:53:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.389 00:07:34.389 real 0m2.664s 00:07:34.389 user 0m8.790s 00:07:34.389 sys 0m0.154s 00:07:34.389 07:53:39 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.389 07:53:39 -- common/autotest_common.sh@10 -- # set +x 00:07:34.389 ************************************ 00:07:34.389 END TEST accel_decomp_full_mcore 00:07:34.389 ************************************ 00:07:34.389 07:53:39 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:34.389 07:53:39 -- common/autotest_common.sh@1077 -- # '[' 11 -le 1 ']' 00:07:34.389 07:53:39 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:34.389 07:53:39 -- common/autotest_common.sh@10 -- # set +x 00:07:34.389 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:34.389 ************************************ 00:07:34.389 START TEST accel_decomp_mthread 00:07:34.389 ************************************ 00:07:34.389 07:53:39 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:34.389 07:53:39 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.389 07:53:39 -- accel/accel.sh@17 -- # local accel_module 00:07:34.389 07:53:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:34.390 07:53:39 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:34.390 07:53:39 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.390 07:53:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.390 07:53:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.390 07:53:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.390 07:53:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.390 07:53:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.390 07:53:39 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.390 07:53:39 -- accel/accel.sh@42 -- # jq -r . 00:07:34.390 [2024-07-13 07:53:39.962561] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:34.390 [2024-07-13 07:53:39.962642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68287 ] 00:07:34.390 [2024-07-13 07:53:40.096034] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.390 [2024-07-13 07:53:40.125448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.766 07:53:41 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:07:35.766 00:07:35.766 SPDK Configuration: 00:07:35.766 Core mask: 0x1 00:07:35.766 00:07:35.766 Accel Perf Configuration: 00:07:35.766 Workload Type: decompress 00:07:35.766 Transfer size: 4096 bytes 00:07:35.766 Vector count 1 00:07:35.766 Module: software 00:07:35.766 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.766 Queue depth: 32 00:07:35.766 Allocate depth: 32 00:07:35.766 # threads/core: 2 00:07:35.766 Run time: 1 seconds 00:07:35.766 Verify: Yes 00:07:35.766 00:07:35.766 Running for 1 seconds... 00:07:35.766 00:07:35.766 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:35.766 ------------------------------------------------------------------------------------ 00:07:35.766 0,1 39392/s 72 MiB/s 0 0 00:07:35.766 0,0 39296/s 72 MiB/s 0 0 00:07:35.766 ==================================================================================== 00:07:35.766 Total 78688/s 307 MiB/s 0 0' 00:07:35.766 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.766 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.766 07:53:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:35.766 07:53:41 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:07:35.766 07:53:41 -- accel/accel.sh@12 -- # build_accel_config 00:07:35.766 07:53:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:35.766 07:53:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:35.766 07:53:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:35.766 07:53:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:35.766 07:53:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:35.766 07:53:41 -- accel/accel.sh@41 -- # local IFS=, 00:07:35.766 07:53:41 -- accel/accel.sh@42 -- # jq -r . 00:07:35.766 [2024-07-13 07:53:41.266303] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:35.766 [2024-07-13 07:53:41.266387] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68295 ] 00:07:35.766 [2024-07-13 07:53:41.401598] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.766 [2024-07-13 07:53:41.430229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.766 07:53:41 -- accel/accel.sh@21 -- # val= 00:07:35.766 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.766 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.766 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.766 07:53:41 -- accel/accel.sh@21 -- # val= 00:07:35.766 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.766 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.766 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.766 07:53:41 -- accel/accel.sh@21 -- # val= 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- accel/accel.sh@21 -- # val=0x1 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- accel/accel.sh@21 -- # val= 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- accel/accel.sh@21 -- # val= 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- accel/accel.sh@21 -- # val=decompress 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- accel/accel.sh@21 -- # val= 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- accel/accel.sh@21 -- # val=software 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@23 -- # accel_module=software 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- accel/accel.sh@21 -- # val=32 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- 
accel/accel.sh@21 -- # val=32 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- accel/accel.sh@21 -- # val=2 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- accel/accel.sh@21 -- # val=Yes 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- accel/accel.sh@21 -- # val= 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:35.767 07:53:41 -- accel/accel.sh@21 -- # val= 00:07:35.767 07:53:41 -- accel/accel.sh@22 -- # case "$var" in 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # IFS=: 00:07:35.767 07:53:41 -- accel/accel.sh@20 -- # read -r var val 00:07:37.145 07:53:42 -- accel/accel.sh@21 -- # val= 00:07:37.145 07:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.145 07:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.145 07:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.145 07:53:42 -- accel/accel.sh@21 -- # val= 00:07:37.145 07:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.145 07:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.145 07:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.145 07:53:42 -- accel/accel.sh@21 -- # val= 00:07:37.145 07:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.145 07:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.145 07:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.145 07:53:42 -- accel/accel.sh@21 -- # val= 00:07:37.145 07:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.145 07:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.145 07:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.145 07:53:42 -- accel/accel.sh@21 -- # val= 00:07:37.145 07:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.145 07:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.145 07:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.145 07:53:42 -- accel/accel.sh@21 -- # val= 00:07:37.145 07:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.145 07:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.145 07:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.145 07:53:42 -- accel/accel.sh@21 -- # val= 00:07:37.145 07:53:42 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.145 07:53:42 -- accel/accel.sh@20 -- # IFS=: 00:07:37.145 07:53:42 -- accel/accel.sh@20 -- # read -r var val 00:07:37.145 07:53:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.145 07:53:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:37.145 07:53:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.145 00:07:37.145 real 0m2.609s 00:07:37.145 user 0m2.266s 00:07:37.145 sys 0m0.147s 00:07:37.145 07:53:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:37.145 ************************************ 00:07:37.145 END TEST accel_decomp_mthread 00:07:37.145 
************************************ 00:07:37.145 07:53:42 -- common/autotest_common.sh@10 -- # set +x 00:07:37.145 07:53:42 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:37.145 07:53:42 -- common/autotest_common.sh@1077 -- # '[' 13 -le 1 ']' 00:07:37.145 07:53:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:37.145 07:53:42 -- common/autotest_common.sh@10 -- # set +x 00:07:37.145 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:37.145 ************************************ 00:07:37.145 START TEST accel_deomp_full_mthread 00:07:37.145 ************************************ 00:07:37.145 07:53:42 -- common/autotest_common.sh@1104 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:37.145 07:53:42 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.145 07:53:42 -- accel/accel.sh@17 -- # local accel_module 00:07:37.145 07:53:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:37.145 07:53:42 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:37.145 07:53:42 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.145 07:53:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.145 07:53:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.145 07:53:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.145 07:53:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.145 07:53:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.145 07:53:42 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.145 07:53:42 -- accel/accel.sh@42 -- # jq -r . 00:07:37.145 [2024-07-13 07:53:42.623424] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:37.145 [2024-07-13 07:53:42.623511] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68318 ] 00:07:37.145 [2024-07-13 07:53:42.758067] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.145 [2024-07-13 07:53:42.787605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.519 07:53:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:07:38.519 00:07:38.519 SPDK Configuration: 00:07:38.519 Core mask: 0x1 00:07:38.519 00:07:38.519 Accel Perf Configuration: 00:07:38.519 Workload Type: decompress 00:07:38.519 Transfer size: 111250 bytes 00:07:38.519 Vector count 1 00:07:38.519 Module: software 00:07:38.519 File Name: /home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.519 Queue depth: 32 00:07:38.519 Allocate depth: 32 00:07:38.519 # threads/core: 2 00:07:38.519 Run time: 1 seconds 00:07:38.519 Verify: Yes 00:07:38.519 00:07:38.519 Running for 1 seconds... 
00:07:38.519 00:07:38.519 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.519 ------------------------------------------------------------------------------------ 00:07:38.519 0,1 2688/s 111 MiB/s 0 0 00:07:38.519 0,0 2656/s 109 MiB/s 0 0 00:07:38.519 ==================================================================================== 00:07:38.519 Total 5344/s 566 MiB/s 0 0' 00:07:38.519 07:53:43 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:43 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.519 07:53:43 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.519 07:53:43 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:07:38.519 07:53:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.519 07:53:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.519 07:53:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.519 07:53:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.519 07:53:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.519 07:53:43 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.519 07:53:43 -- accel/accel.sh@42 -- # jq -r . 00:07:38.519 [2024-07-13 07:53:43.958003] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:38.519 [2024-07-13 07:53:43.958126] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68331 ] 00:07:38.519 [2024-07-13 07:53:44.085007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.519 [2024-07-13 07:53:44.113701] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val= 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val= 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val= 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val=0x1 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val= 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val= 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val=decompress 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@24 -- # 
accel_opc=decompress 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val='111250 bytes' 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val= 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val=software 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@23 -- # accel_module=software 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val=32 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val=32 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val=2 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val=Yes 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val= 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:38.519 07:53:44 -- accel/accel.sh@21 -- # val= 00:07:38.519 07:53:44 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # IFS=: 00:07:38.519 07:53:44 -- accel/accel.sh@20 -- # read -r var val 00:07:39.456 07:53:45 -- accel/accel.sh@21 -- # val= 00:07:39.456 07:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.456 07:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:39.456 07:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:39.456 07:53:45 -- accel/accel.sh@21 -- # val= 00:07:39.456 07:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.456 07:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:39.456 07:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:39.456 07:53:45 -- accel/accel.sh@21 -- # val= 00:07:39.456 07:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.456 07:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:39.456 07:53:45 -- accel/accel.sh@20 -- # 
read -r var val 00:07:39.456 07:53:45 -- accel/accel.sh@21 -- # val= 00:07:39.456 07:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.456 07:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:39.456 07:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:39.456 07:53:45 -- accel/accel.sh@21 -- # val= 00:07:39.456 07:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.456 07:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:39.456 07:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:39.456 07:53:45 -- accel/accel.sh@21 -- # val= 00:07:39.456 07:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.456 07:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:39.456 07:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:39.456 07:53:45 -- accel/accel.sh@21 -- # val= 00:07:39.456 07:53:45 -- accel/accel.sh@22 -- # case "$var" in 00:07:39.456 07:53:45 -- accel/accel.sh@20 -- # IFS=: 00:07:39.456 07:53:45 -- accel/accel.sh@20 -- # read -r var val 00:07:39.456 07:53:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:39.456 07:53:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:07:39.456 07:53:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:39.456 00:07:39.456 real 0m2.659s 00:07:39.456 user 0m2.320s 00:07:39.456 sys 0m0.141s 00:07:39.456 07:53:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.456 ************************************ 00:07:39.456 07:53:45 -- common/autotest_common.sh@10 -- # set +x 00:07:39.457 END TEST accel_deomp_full_mthread 00:07:39.457 ************************************ 00:07:39.716 07:53:45 -- accel/accel.sh@116 -- # [[ n == y ]] 00:07:39.716 07:53:45 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:39.716 07:53:45 -- accel/accel.sh@129 -- # build_accel_config 00:07:39.716 07:53:45 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:39.716 07:53:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:39.716 07:53:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:39.716 07:53:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:39.716 07:53:45 -- common/autotest_common.sh@10 -- # set +x 00:07:39.716 07:53:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:39.716 07:53:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:39.716 07:53:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:39.716 07:53:45 -- accel/accel.sh@41 -- # local IFS=, 00:07:39.716 07:53:45 -- accel/accel.sh@42 -- # jq -r . 00:07:39.716 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:39.716 ************************************ 00:07:39.716 START TEST accel_dif_functional_tests 00:07:39.716 ************************************ 00:07:39.716 07:53:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:39.716 [2024-07-13 07:53:45.357621] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:39.716 [2024-07-13 07:53:45.357716] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68355 ] 00:07:39.716 [2024-07-13 07:53:45.497413] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.975 [2024-07-13 07:53:45.538307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:39.975 [2024-07-13 07:53:45.538452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.975 [2024-07-13 07:53:45.538460] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.975 00:07:39.975 00:07:39.975 CUnit - A unit testing framework for C - Version 2.1-3 00:07:39.975 http://cunit.sourceforge.net/ 00:07:39.975 00:07:39.975 00:07:39.975 Suite: accel_dif 00:07:39.975 Test: verify: DIF generated, GUARD check ...passed 00:07:39.975 Test: verify: DIF generated, APPTAG check ...passed 00:07:39.975 Test: verify: DIF generated, REFTAG check ...passed 00:07:39.975 Test: verify: DIF not generated, GUARD check ...[2024-07-13 07:53:45.591007] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:39.975 passed 00:07:39.975 Test: verify: DIF not generated, APPTAG check ...passed 00:07:39.975 Test: verify: DIF not generated, REFTAG check ...[2024-07-13 07:53:45.591085] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:39.975 [2024-07-13 07:53:45.591129] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:39.975 [2024-07-13 07:53:45.591159] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:39.975 [2024-07-13 07:53:45.591185] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:39.975 [2024-07-13 07:53:45.591213] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:39.975 passed 00:07:39.975 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:39.975 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:07:39.975 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:39.975 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-07-13 07:53:45.591275] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:39.975 passed 00:07:39.975 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:39.975 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:07:39.975 Test: generate copy: DIF generated, GUARD check ...[2024-07-13 07:53:45.591445] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:39.975 passed 00:07:39.975 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:39.975 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:39.975 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:39.975 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:39.975 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:39.975 Test: generate copy: iovecs-len validate ...[2024-07-13 07:53:45.591754] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:39.975 passed 00:07:39.975 Test: generate copy: buffer alignment validate ...passed 00:07:39.975 00:07:39.975 Run Summary: Type Total Ran Passed Failed Inactive 00:07:39.975 suites 1 1 n/a 0 0 00:07:39.975 tests 20 20 20 0 0 00:07:39.975 asserts 204 204 204 0 n/a 00:07:39.975 00:07:39.975 Elapsed time = 0.002 seconds 00:07:39.975 00:07:39.975 real 0m0.432s 00:07:39.975 user 0m0.513s 00:07:39.975 sys 0m0.098s 00:07:39.975 07:53:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.975 07:53:45 -- common/autotest_common.sh@10 -- # set +x 00:07:39.975 ************************************ 00:07:39.975 END TEST accel_dif_functional_tests 00:07:39.975 ************************************ 00:07:39.975 00:07:39.975 real 0m56.708s 00:07:39.975 user 1m2.889s 00:07:39.975 sys 0m5.898s 00:07:39.975 07:53:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.975 07:53:45 -- common/autotest_common.sh@10 -- # set +x 00:07:39.975 ************************************ 00:07:39.975 END TEST accel 00:07:39.975 ************************************ 00:07:40.234 07:53:45 -- spdk/autotest.sh@190 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:40.234 07:53:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.234 07:53:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.234 07:53:45 -- common/autotest_common.sh@10 -- # set +x 00:07:40.234 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:40.234 ************************************ 00:07:40.234 START TEST accel_rpc 00:07:40.234 ************************************ 00:07:40.234 07:53:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:07:40.234 * Looking for test storage... 00:07:40.234 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:07:40.234 07:53:45 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:40.234 07:53:45 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=68413 00:07:40.234 07:53:45 -- accel/accel_rpc.sh@15 -- # waitforlisten 68413 00:07:40.234 07:53:45 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:40.234 07:53:45 -- common/autotest_common.sh@819 -- # '[' -z 68413 ']' 00:07:40.234 07:53:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.234 07:53:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:40.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.234 07:53:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.234 07:53:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:40.234 07:53:45 -- common/autotest_common.sh@10 -- # set +x 00:07:40.234 [2024-07-13 07:53:45.976031] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:40.234 [2024-07-13 07:53:45.976151] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68413 ] 00:07:40.493 [2024-07-13 07:53:46.112146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.493 [2024-07-13 07:53:46.141659] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:40.493 [2024-07-13 07:53:46.141885] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.494 07:53:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:40.494 07:53:46 -- common/autotest_common.sh@852 -- # return 0 00:07:40.494 07:53:46 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:40.494 07:53:46 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:40.494 07:53:46 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:40.494 07:53:46 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:40.494 07:53:46 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:40.494 07:53:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:40.494 07:53:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:40.494 07:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:40.494 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:40.494 ************************************ 00:07:40.494 START TEST accel_assign_opcode 00:07:40.494 ************************************ 00:07:40.494 07:53:46 -- common/autotest_common.sh@1104 -- # accel_assign_opcode_test_suite 00:07:40.494 07:53:46 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:40.494 07:53:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.494 07:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:40.494 [2024-07-13 07:53:46.210270] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:40.494 07:53:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.494 07:53:46 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:40.494 07:53:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.494 07:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:40.494 [2024-07-13 07:53:46.218256] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:40.494 07:53:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.494 07:53:46 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:40.494 07:53:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.494 07:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:40.753 07:53:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.753 07:53:46 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:40.753 07:53:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:40.753 07:53:46 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:40.753 07:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:40.753 07:53:46 -- accel/accel_rpc.sh@42 -- # grep software 00:07:40.753 07:53:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:40.753 software 00:07:40.753 00:07:40.753 real 0m0.190s 00:07:40.753 user 0m0.054s 00:07:40.753 sys 0m0.015s 00:07:40.753 07:53:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:07:40.753 07:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:40.753 ************************************ 00:07:40.753 END TEST accel_assign_opcode 00:07:40.753 ************************************ 00:07:40.753 07:53:46 -- accel/accel_rpc.sh@55 -- # killprocess 68413 00:07:40.753 07:53:46 -- common/autotest_common.sh@926 -- # '[' -z 68413 ']' 00:07:40.753 07:53:46 -- common/autotest_common.sh@930 -- # kill -0 68413 00:07:40.753 07:53:46 -- common/autotest_common.sh@931 -- # uname 00:07:40.753 07:53:46 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:40.753 07:53:46 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68413 00:07:40.753 07:53:46 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:40.753 07:53:46 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:40.753 killing process with pid 68413 00:07:40.753 07:53:46 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68413' 00:07:40.753 07:53:46 -- common/autotest_common.sh@945 -- # kill 68413 00:07:40.753 07:53:46 -- common/autotest_common.sh@950 -- # wait 68413 00:07:41.013 00:07:41.013 real 0m0.831s 00:07:41.013 user 0m0.827s 00:07:41.013 sys 0m0.294s 00:07:41.013 07:53:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:41.013 07:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:41.013 ************************************ 00:07:41.013 END TEST accel_rpc 00:07:41.013 ************************************ 00:07:41.013 07:53:46 -- spdk/autotest.sh@191 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:41.013 07:53:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:41.013 07:53:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:41.013 07:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:41.013 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:41.013 ************************************ 00:07:41.013 START TEST app_cmdline 00:07:41.013 ************************************ 00:07:41.013 07:53:46 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:41.013 * Looking for test storage... 00:07:41.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:41.013 07:53:46 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:41.013 07:53:46 -- app/cmdline.sh@17 -- # spdk_tgt_pid=68486 00:07:41.013 07:53:46 -- app/cmdline.sh@18 -- # waitforlisten 68486 00:07:41.013 07:53:46 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:41.013 07:53:46 -- common/autotest_common.sh@819 -- # '[' -z 68486 ']' 00:07:41.013 07:53:46 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:41.013 07:53:46 -- common/autotest_common.sh@824 -- # local max_retries=100 00:07:41.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:41.013 07:53:46 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:41.013 07:53:46 -- common/autotest_common.sh@828 -- # xtrace_disable 00:07:41.013 07:53:46 -- common/autotest_common.sh@10 -- # set +x 00:07:41.272 [2024-07-13 07:53:46.852068] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:41.272 [2024-07-13 07:53:46.852173] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68486 ] 00:07:41.272 [2024-07-13 07:53:46.992503] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.272 [2024-07-13 07:53:47.032810] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:41.272 [2024-07-13 07:53:47.032999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.208 07:53:47 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:07:42.208 07:53:47 -- common/autotest_common.sh@852 -- # return 0 00:07:42.208 07:53:47 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:42.467 { 00:07:42.467 "version": "SPDK v24.01.1-pre git sha1 4b94202c6", 00:07:42.467 "fields": { 00:07:42.467 "major": 24, 00:07:42.467 "minor": 1, 00:07:42.467 "patch": 1, 00:07:42.467 "suffix": "-pre", 00:07:42.467 "commit": "4b94202c6" 00:07:42.467 } 00:07:42.467 } 00:07:42.467 07:53:48 -- app/cmdline.sh@22 -- # expected_methods=() 00:07:42.467 07:53:48 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:42.467 07:53:48 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:42.467 07:53:48 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:42.467 07:53:48 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:42.467 07:53:48 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:42.467 07:53:48 -- common/autotest_common.sh@551 -- # xtrace_disable 00:07:42.467 07:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:42.467 07:53:48 -- app/cmdline.sh@26 -- # sort 00:07:42.467 07:53:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:07:42.467 07:53:48 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:42.467 07:53:48 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:42.467 07:53:48 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.467 07:53:48 -- common/autotest_common.sh@640 -- # local es=0 00:07:42.467 07:53:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.467 07:53:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.467 07:53:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.467 07:53:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.467 07:53:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.467 07:53:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.467 07:53:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:42.467 07:53:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:42.467 07:53:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:42.467 07:53:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:42.726 request: 00:07:42.726 { 00:07:42.726 "method": "env_dpdk_get_mem_stats", 00:07:42.726 "req_id": 1 00:07:42.726 } 00:07:42.726 Got 
JSON-RPC error response 00:07:42.726 response: 00:07:42.726 { 00:07:42.726 "code": -32601, 00:07:42.726 "message": "Method not found" 00:07:42.726 } 00:07:42.726 07:53:48 -- common/autotest_common.sh@643 -- # es=1 00:07:42.726 07:53:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:42.726 07:53:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:07:42.726 07:53:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:42.726 07:53:48 -- app/cmdline.sh@1 -- # killprocess 68486 00:07:42.726 07:53:48 -- common/autotest_common.sh@926 -- # '[' -z 68486 ']' 00:07:42.726 07:53:48 -- common/autotest_common.sh@930 -- # kill -0 68486 00:07:42.726 07:53:48 -- common/autotest_common.sh@931 -- # uname 00:07:42.726 07:53:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:07:42.726 07:53:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 68486 00:07:42.726 07:53:48 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:07:42.726 07:53:48 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:07:42.726 killing process with pid 68486 00:07:42.726 07:53:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 68486' 00:07:42.726 07:53:48 -- common/autotest_common.sh@945 -- # kill 68486 00:07:42.726 07:53:48 -- common/autotest_common.sh@950 -- # wait 68486 00:07:42.985 00:07:42.985 real 0m1.864s 00:07:42.985 user 0m2.463s 00:07:42.985 sys 0m0.344s 00:07:42.985 07:53:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.985 07:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:42.985 ************************************ 00:07:42.985 END TEST app_cmdline 00:07:42.985 ************************************ 00:07:42.985 07:53:48 -- spdk/autotest.sh@192 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:42.985 07:53:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:42.985 07:53:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:42.985 07:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:42.985 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:42.985 ************************************ 00:07:42.986 START TEST version 00:07:42.986 ************************************ 00:07:42.986 07:53:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:42.986 * Looking for test storage... 
00:07:42.986 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:42.986 07:53:48 -- app/version.sh@17 -- # get_header_version major 00:07:42.986 07:53:48 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:42.986 07:53:48 -- app/version.sh@14 -- # cut -f2 00:07:42.986 07:53:48 -- app/version.sh@14 -- # tr -d '"' 00:07:42.986 07:53:48 -- app/version.sh@17 -- # major=24 00:07:42.986 07:53:48 -- app/version.sh@18 -- # get_header_version minor 00:07:42.986 07:53:48 -- app/version.sh@14 -- # cut -f2 00:07:42.986 07:53:48 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:42.986 07:53:48 -- app/version.sh@14 -- # tr -d '"' 00:07:42.986 07:53:48 -- app/version.sh@18 -- # minor=1 00:07:42.986 07:53:48 -- app/version.sh@19 -- # get_header_version patch 00:07:42.986 07:53:48 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:42.986 07:53:48 -- app/version.sh@14 -- # cut -f2 00:07:42.986 07:53:48 -- app/version.sh@14 -- # tr -d '"' 00:07:42.986 07:53:48 -- app/version.sh@19 -- # patch=1 00:07:42.986 07:53:48 -- app/version.sh@20 -- # get_header_version suffix 00:07:42.986 07:53:48 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:42.986 07:53:48 -- app/version.sh@14 -- # cut -f2 00:07:42.986 07:53:48 -- app/version.sh@14 -- # tr -d '"' 00:07:42.986 07:53:48 -- app/version.sh@20 -- # suffix=-pre 00:07:42.986 07:53:48 -- app/version.sh@22 -- # version=24.1 00:07:42.986 07:53:48 -- app/version.sh@25 -- # (( patch != 0 )) 00:07:42.986 07:53:48 -- app/version.sh@25 -- # version=24.1.1 00:07:42.986 07:53:48 -- app/version.sh@28 -- # version=24.1.1rc0 00:07:42.986 07:53:48 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:42.986 07:53:48 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:42.986 07:53:48 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:07:42.986 07:53:48 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:07:42.986 00:07:42.986 real 0m0.141s 00:07:42.986 user 0m0.086s 00:07:42.986 sys 0m0.089s 00:07:42.986 07:53:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.986 ************************************ 00:07:42.986 END TEST version 00:07:42.986 07:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:42.986 ************************************ 00:07:43.245 07:53:48 -- spdk/autotest.sh@194 -- # '[' 0 -eq 1 ']' 00:07:43.245 07:53:48 -- spdk/autotest.sh@204 -- # uname -s 00:07:43.245 07:53:48 -- spdk/autotest.sh@204 -- # [[ Linux == Linux ]] 00:07:43.245 07:53:48 -- spdk/autotest.sh@205 -- # [[ 0 -eq 1 ]] 00:07:43.245 07:53:48 -- spdk/autotest.sh@205 -- # [[ 1 -eq 1 ]] 00:07:43.245 07:53:48 -- spdk/autotest.sh@211 -- # [[ 0 -eq 0 ]] 00:07:43.245 07:53:48 -- spdk/autotest.sh@212 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:43.245 07:53:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:43.245 07:53:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.245 07:53:48 -- common/autotest_common.sh@10 -- # set +x 00:07:43.245 
/home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:43.245 ************************************ 00:07:43.245 START TEST spdk_dd 00:07:43.245 ************************************ 00:07:43.245 07:53:48 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:43.245 * Looking for test storage... 00:07:43.245 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:43.245 07:53:48 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.245 07:53:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.245 07:53:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.245 07:53:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.245 07:53:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.245 07:53:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.245 07:53:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.245 07:53:48 -- paths/export.sh@5 -- # export PATH 00:07:43.245 07:53:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.245 07:53:48 -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:43.504 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:43.504 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:43.504 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:43.504 07:53:49 -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:43.504 07:53:49 -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:43.504 07:53:49 -- scripts/common.sh@311 -- # local bdf bdfs 00:07:43.504 07:53:49 -- scripts/common.sh@312 -- # local nvmes 00:07:43.504 07:53:49 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:07:43.505 07:53:49 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:43.505 07:53:49 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:07:43.505 
07:53:49 -- scripts/common.sh@297 -- # local bdf= 00:07:43.505 07:53:49 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:07:43.505 07:53:49 -- scripts/common.sh@232 -- # local class 00:07:43.505 07:53:49 -- scripts/common.sh@233 -- # local subclass 00:07:43.505 07:53:49 -- scripts/common.sh@234 -- # local progif 00:07:43.505 07:53:49 -- scripts/common.sh@235 -- # printf %02x 1 00:07:43.505 07:53:49 -- scripts/common.sh@235 -- # class=01 00:07:43.505 07:53:49 -- scripts/common.sh@236 -- # printf %02x 8 00:07:43.505 07:53:49 -- scripts/common.sh@236 -- # subclass=08 00:07:43.505 07:53:49 -- scripts/common.sh@237 -- # printf %02x 2 00:07:43.505 07:53:49 -- scripts/common.sh@237 -- # progif=02 00:07:43.505 07:53:49 -- scripts/common.sh@239 -- # hash lspci 00:07:43.505 07:53:49 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:07:43.505 07:53:49 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:07:43.505 07:53:49 -- scripts/common.sh@242 -- # grep -i -- -p02 00:07:43.505 07:53:49 -- scripts/common.sh@244 -- # tr -d '"' 00:07:43.505 07:53:49 -- scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:43.505 07:53:49 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:43.505 07:53:49 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:07:43.505 07:53:49 -- scripts/common.sh@15 -- # local i 00:07:43.505 07:53:49 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:07:43.505 07:53:49 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:43.505 07:53:49 -- scripts/common.sh@24 -- # return 0 00:07:43.505 07:53:49 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:07:43.505 07:53:49 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:43.505 07:53:49 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:07:43.505 07:53:49 -- scripts/common.sh@15 -- # local i 00:07:43.505 07:53:49 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:07:43.765 07:53:49 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:07:43.765 07:53:49 -- scripts/common.sh@24 -- # return 0 00:07:43.766 07:53:49 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:07:43.766 07:53:49 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:43.766 07:53:49 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:07:43.766 07:53:49 -- scripts/common.sh@322 -- # uname -s 00:07:43.766 07:53:49 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:43.766 07:53:49 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:43.766 07:53:49 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:07:43.766 07:53:49 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:07:43.766 07:53:49 -- scripts/common.sh@322 -- # uname -s 00:07:43.766 07:53:49 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:07:43.766 07:53:49 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:07:43.766 07:53:49 -- scripts/common.sh@327 -- # (( 2 )) 00:07:43.766 07:53:49 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:07:43.766 07:53:49 -- dd/dd.sh@13 -- # check_liburing 00:07:43.766 07:53:49 -- dd/common.sh@139 -- # local lib so 00:07:43.766 07:53:49 -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@137 -- # LD_TRACE_LOADED_OBJECTS=1 00:07:43.766 07:53:49 -- dd/common.sh@137 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ linux-vdso.so.1 == 
liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.6.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.9.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.10.1 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_lvol.so.9.1 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_blob.so.10.1 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_nvme.so.12.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_rdma.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_ftl.so.8.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.5.0 == liburing.so.* ]] 
00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_virtio.so.6.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.4.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.1.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_ioat.so.6.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.4.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.2.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_idxd.so.11.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.3.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.13.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.3.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.3.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.4.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_event.so.12.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_bdev.so.14.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_notify.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_accel.so.14.0 == 
liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_dma.so.3.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_vmd.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.4.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_sock.so.8.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.2.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_init.so.4.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_thread.so.9.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_trace.so.9.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_rpc.so.5.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.5.1 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_json.so.5.1 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_util.so.8.0 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ libspdk_log.so.6.1 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ librte_bus_pci.so.23 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ librte_cryptodev.so.23 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ librte_dmadev.so.23 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ librte_eal.so.23 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ librte_ethdev.so.23 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ librte_hash.so.23 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ librte_kvargs.so.23 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ librte_mbuf.so.23 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 
-- dd/common.sh@143 -- # [[ librte_mempool.so.23 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.23 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ librte_net.so.23 == liburing.so.* ]] 00:07:43.766 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.766 07:53:49 -- dd/common.sh@143 -- # [[ librte_pci.so.23 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ librte_power.so.23 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ librte_rcu.so.23 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ librte_ring.so.23 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ librte_telemetry.so.23 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ librte_vhost.so.23 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ libisal_crypto.so.2 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ libaccel-config.so.1 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ libaio.so.1 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ libiscsi.so.9 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ libubsan.so.1 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ libc.so.6 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ libibverbs.so.1 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ librdmacm.so.1 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ libfuse3.so.3 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ /lib64/ld-linux-x86-64.so.2 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@142 -- # read -r lib _ so _ 00:07:43.767 07:53:49 -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:43.767 07:53:49 -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:43.767 * spdk_dd linked to liburing 00:07:43.767 07:53:49 -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:43.767 07:53:49 -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:43.767 07:53:49 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:43.767 07:53:49 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:43.767 07:53:49 -- common/build_config.sh@3 -- # 
CONFIG_VBDEV_COMPRESS=n 00:07:43.767 07:53:49 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:43.767 07:53:49 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:43.767 07:53:49 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:43.767 07:53:49 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:43.767 07:53:49 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:43.767 07:53:49 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:43.767 07:53:49 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:43.767 07:53:49 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:43.767 07:53:49 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:43.767 07:53:49 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:43.767 07:53:49 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:43.767 07:53:49 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:43.767 07:53:49 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:43.767 07:53:49 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:43.767 07:53:49 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:43.767 07:53:49 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:43.767 07:53:49 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:43.767 07:53:49 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:43.767 07:53:49 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:43.767 07:53:49 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:43.767 07:53:49 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:43.767 07:53:49 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:43.767 07:53:49 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:43.767 07:53:49 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:43.767 07:53:49 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:43.767 07:53:49 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:43.767 07:53:49 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:43.767 07:53:49 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:43.767 07:53:49 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:43.767 07:53:49 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:43.767 07:53:49 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:43.767 07:53:49 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:43.767 07:53:49 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/dpdk/build 00:07:43.767 07:53:49 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:43.767 07:53:49 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:43.767 07:53:49 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:43.767 07:53:49 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:43.767 07:53:49 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//home/vagrant/spdk_repo/dpdk/build/include 00:07:43.767 07:53:49 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:43.767 07:53:49 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:43.767 07:53:49 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:43.767 07:53:49 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:43.767 07:53:49 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:07:43.767 07:53:49 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:07:43.767 07:53:49 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:43.767 07:53:49 -- 
common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:07:43.767 07:53:49 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:07:43.767 07:53:49 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:07:43.767 07:53:49 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:07:43.767 07:53:49 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=y 00:07:43.767 07:53:49 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:07:43.767 07:53:49 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:07:43.767 07:53:49 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:07:43.767 07:53:49 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:07:43.767 07:53:49 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:07:43.767 07:53:49 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:07:43.767 07:53:49 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:07:43.767 07:53:49 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/home/vagrant/spdk_repo/dpdk/build/lib 00:07:43.767 07:53:49 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:07:43.767 07:53:49 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:07:43.767 07:53:49 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:07:43.767 07:53:49 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:07:43.767 07:53:49 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:43.767 07:53:49 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:07:43.767 07:53:49 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:07:43.767 07:53:49 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:07:43.767 07:53:49 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:07:43.767 07:53:49 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:07:43.767 07:53:49 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:07:43.767 07:53:49 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:07:43.767 07:53:49 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:07:43.767 07:53:49 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:07:43.767 07:53:49 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:07:43.767 07:53:49 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:43.767 07:53:49 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:07:43.767 07:53:49 -- common/build_config.sh@79 -- # CONFIG_URING=y 00:07:43.767 07:53:49 -- dd/common.sh@149 -- # [[ y != y ]] 00:07:43.767 07:53:49 -- dd/common.sh@152 -- # [[ ! -e /usr/lib64/liburing.so.2 ]] 00:07:43.767 07:53:49 -- dd/common.sh@156 -- # export liburing_in_use=1 00:07:43.767 07:53:49 -- dd/common.sh@156 -- # liburing_in_use=1 00:07:43.767 07:53:49 -- dd/common.sh@157 -- # return 0 00:07:43.767 07:53:49 -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:43.767 07:53:49 -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:43.767 07:53:49 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:07:43.767 07:53:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:43.767 07:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:43.767 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:43.767 ************************************ 00:07:43.767 START TEST spdk_dd_basic_rw 00:07:43.767 ************************************ 00:07:43.767 07:53:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:06.0 0000:00:07.0 00:07:43.767 * Looking for test storage... 
00:07:43.767 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:43.767 07:53:49 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:43.767 07:53:49 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:43.767 07:53:49 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:43.767 07:53:49 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:43.767 07:53:49 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.767 07:53:49 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.767 07:53:49 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.767 07:53:49 -- paths/export.sh@5 -- # export PATH 00:07:43.767 07:53:49 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:43.767 07:53:49 -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:43.767 07:53:49 -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:43.767 07:53:49 -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:43.767 07:53:49 -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:06.0 00:07:43.768 07:53:49 -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:43.768 07:53:49 -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:07:43.768 07:53:49 -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:43.768 07:53:49 -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:43.768 07:53:49 -- dd/basic_rw.sh@92 -- # 
test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:43.768 07:53:49 -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:06.0 00:07:43.768 07:53:49 -- dd/common.sh@124 -- # local pci=0000:00:06.0 lbaf id 00:07:43.768 07:53:49 -- dd/common.sh@126 -- # mapfile -t id 00:07:43.768 07:53:49 -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0' 00:07:44.029 07:53:49 -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric 
Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 98 Data Units Written: 7 Host Read Commands: 2086 Host Write Commands: 92 Controller Busy Time: 0 minutes Power Cycles: 0 Power 
On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:44.029 07:53:49 -- dd/common.sh@130 -- # lbaf=04 00:07:44.030 07:53:49 -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:06.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported 
UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported 
Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 98 Data Units Written: 7 Host Read Commands: 2086 Host Write Commands: 92 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:44.030 07:53:49 -- dd/common.sh@132 -- # lbaf=4096 00:07:44.030 07:53:49 -- dd/common.sh@134 -- # echo 4096 00:07:44.030 07:53:49 -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:44.030 07:53:49 -- dd/basic_rw.sh@96 -- # : 00:07:44.030 07:53:49 -- dd/basic_rw.sh@96 -- # gen_conf 00:07:44.030 07:53:49 -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:44.030 07:53:49 -- dd/common.sh@31 -- # xtrace_disable 00:07:44.030 07:53:49 -- common/autotest_common.sh@10 -- # set +x 00:07:44.030 07:53:49 -- common/autotest_common.sh@1077 -- # '[' 8 -le 1 ']' 00:07:44.030 07:53:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.030 07:53:49 -- common/autotest_common.sh@10 -- # set +x 
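The get_native_nvme_bs call traced above captures the full spdk_nvme_identify report and applies two regexes: one to find the active LBA format index (#04 here) and one to read that format's data size, which becomes the native block size (4096) that the following dd_bs_lt_native_bs test deliberately undercuts with --bs=2048. A condensed bash sketch of the same extraction, using a scalar instead of the script's mapfile array; only the binary path and regex shapes shown in the trace are reused:

# Pull the native block size out of the identify report in two regex steps.
id=$(/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:06.0')
re_fmt='Current LBA Format: *LBA Format #([0-9]+)'
[[ $id =~ $re_fmt ]] && lbaf=${BASH_REMATCH[1]}         # "04" in this run
re_bs="LBA Format #${lbaf}: Data Size: *([0-9]+)"
[[ $id =~ $re_bs ]] && native_bs=${BASH_REMATCH[1]}     # 4096 in this run
echo "$native_bs"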
00:07:44.030 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:44.030 ************************************ 00:07:44.030 START TEST dd_bs_lt_native_bs 00:07:44.030 ************************************ 00:07:44.030 07:53:49 -- common/autotest_common.sh@1104 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:44.030 07:53:49 -- common/autotest_common.sh@640 -- # local es=0 00:07:44.030 07:53:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:44.030 07:53:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.030 07:53:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.030 07:53:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.030 07:53:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.030 07:53:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.030 07:53:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:44.030 07:53:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:44.030 07:53:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:44.030 07:53:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:44.030 { 00:07:44.030 "subsystems": [ 00:07:44.030 { 00:07:44.030 "subsystem": "bdev", 00:07:44.030 "config": [ 00:07:44.030 { 00:07:44.030 "params": { 00:07:44.030 "trtype": "pcie", 00:07:44.030 "traddr": "0000:00:06.0", 00:07:44.030 "name": "Nvme0" 00:07:44.030 }, 00:07:44.030 "method": "bdev_nvme_attach_controller" 00:07:44.030 }, 00:07:44.030 { 00:07:44.030 "method": "bdev_wait_for_examine" 00:07:44.030 } 00:07:44.030 ] 00:07:44.030 } 00:07:44.030 ] 00:07:44.030 } 00:07:44.030 [2024-07-13 07:53:49.713406] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:44.030 [2024-07-13 07:53:49.713499] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68788 ] 00:07:44.290 [2024-07-13 07:53:49.852948] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.290 [2024-07-13 07:53:49.892258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.290 [2024-07-13 07:53:50.007322] spdk_dd.c:1145:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:44.290 [2024-07-13 07:53:50.007403] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:44.290 [2024-07-13 07:53:50.078365] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:44.551 07:53:50 -- common/autotest_common.sh@643 -- # es=234 00:07:44.551 07:53:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:44.551 07:53:50 -- common/autotest_common.sh@652 -- # es=106 00:07:44.551 07:53:50 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:44.551 07:53:50 -- common/autotest_common.sh@660 -- # es=1 00:07:44.551 07:53:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:44.551 00:07:44.551 real 0m0.485s 00:07:44.551 user 0m0.343s 00:07:44.551 sys 0m0.097s 00:07:44.551 07:53:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.551 07:53:50 -- common/autotest_common.sh@10 -- # set +x 00:07:44.551 ************************************ 00:07:44.551 END TEST dd_bs_lt_native_bs 00:07:44.551 ************************************ 00:07:44.551 07:53:50 -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:44.551 07:53:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:07:44.551 07:53:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:44.551 07:53:50 -- common/autotest_common.sh@10 -- # set +x 00:07:44.551 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:44.551 ************************************ 00:07:44.551 START TEST dd_rw 00:07:44.551 ************************************ 00:07:44.551 07:53:50 -- common/autotest_common.sh@1104 -- # basic_rw 4096 00:07:44.551 07:53:50 -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:44.551 07:53:50 -- dd/basic_rw.sh@12 -- # local count size 00:07:44.551 07:53:50 -- dd/basic_rw.sh@13 -- # local qds bss 00:07:44.551 07:53:50 -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:44.551 07:53:50 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:44.551 07:53:50 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:44.551 07:53:50 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:44.551 07:53:50 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:44.551 07:53:50 -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:44.551 07:53:50 -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:44.551 07:53:50 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:44.551 07:53:50 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:44.551 07:53:50 -- dd/basic_rw.sh@23 -- # count=15 00:07:44.551 07:53:50 -- dd/basic_rw.sh@24 -- # count=15 00:07:44.551 07:53:50 -- dd/basic_rw.sh@25 -- # size=61440 00:07:44.551 07:53:50 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:44.551 07:53:50 -- dd/common.sh@98 -- # xtrace_disable 00:07:44.551 07:53:50 -- common/autotest_common.sh@10 -- # set +x 00:07:45.138 07:53:50 -- dd/basic_rw.sh@30 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:45.138 07:53:50 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:45.138 07:53:50 -- dd/common.sh@31 -- # xtrace_disable 00:07:45.138 07:53:50 -- common/autotest_common.sh@10 -- # set +x 00:07:45.138 [2024-07-13 07:53:50.855036] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:45.138 [2024-07-13 07:53:50.855135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68813 ] 00:07:45.138 { 00:07:45.138 "subsystems": [ 00:07:45.138 { 00:07:45.138 "subsystem": "bdev", 00:07:45.138 "config": [ 00:07:45.138 { 00:07:45.138 "params": { 00:07:45.138 "trtype": "pcie", 00:07:45.138 "traddr": "0000:00:06.0", 00:07:45.138 "name": "Nvme0" 00:07:45.138 }, 00:07:45.138 "method": "bdev_nvme_attach_controller" 00:07:45.138 }, 00:07:45.138 { 00:07:45.138 "method": "bdev_wait_for_examine" 00:07:45.138 } 00:07:45.138 ] 00:07:45.138 } 00:07:45.138 ] 00:07:45.138 } 00:07:45.398 [2024-07-13 07:53:50.991257] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.398 [2024-07-13 07:53:51.020684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.658  Copying: 60/60 [kB] (average 19 MBps) 00:07:45.658 00:07:45.658 07:53:51 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:45.658 07:53:51 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:45.658 07:53:51 -- dd/common.sh@31 -- # xtrace_disable 00:07:45.658 07:53:51 -- common/autotest_common.sh@10 -- # set +x 00:07:45.658 { 00:07:45.658 "subsystems": [ 00:07:45.658 { 00:07:45.658 "subsystem": "bdev", 00:07:45.658 "config": [ 00:07:45.658 { 00:07:45.658 "params": { 00:07:45.658 "trtype": "pcie", 00:07:45.658 "traddr": "0000:00:06.0", 00:07:45.658 "name": "Nvme0" 00:07:45.658 }, 00:07:45.658 "method": "bdev_nvme_attach_controller" 00:07:45.658 }, 00:07:45.658 { 00:07:45.658 "method": "bdev_wait_for_examine" 00:07:45.658 } 00:07:45.658 ] 00:07:45.658 } 00:07:45.658 ] 00:07:45.658 } 00:07:45.658 [2024-07-13 07:53:51.326809] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:45.658 [2024-07-13 07:53:51.326912] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68825 ] 00:07:45.658 [2024-07-13 07:53:51.462283] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.918 [2024-07-13 07:53:51.492522] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.177  Copying: 60/60 [kB] (average 29 MBps) 00:07:46.177 00:07:46.177 07:53:51 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.177 07:53:51 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:46.177 07:53:51 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:46.177 07:53:51 -- dd/common.sh@11 -- # local nvme_ref= 00:07:46.177 07:53:51 -- dd/common.sh@12 -- # local size=61440 00:07:46.177 07:53:51 -- dd/common.sh@14 -- # local bs=1048576 00:07:46.177 07:53:51 -- dd/common.sh@15 -- # local count=1 00:07:46.177 07:53:51 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:46.177 07:53:51 -- dd/common.sh@18 -- # gen_conf 00:07:46.177 07:53:51 -- dd/common.sh@31 -- # xtrace_disable 00:07:46.177 07:53:51 -- common/autotest_common.sh@10 -- # set +x 00:07:46.177 [2024-07-13 07:53:51.805323] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:46.177 [2024-07-13 07:53:51.805414] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68833 ] 00:07:46.177 { 00:07:46.177 "subsystems": [ 00:07:46.177 { 00:07:46.177 "subsystem": "bdev", 00:07:46.177 "config": [ 00:07:46.177 { 00:07:46.177 "params": { 00:07:46.177 "trtype": "pcie", 00:07:46.177 "traddr": "0000:00:06.0", 00:07:46.177 "name": "Nvme0" 00:07:46.177 }, 00:07:46.177 "method": "bdev_nvme_attach_controller" 00:07:46.177 }, 00:07:46.177 { 00:07:46.177 "method": "bdev_wait_for_examine" 00:07:46.177 } 00:07:46.177 ] 00:07:46.177 } 00:07:46.177 ] 00:07:46.177 } 00:07:46.177 [2024-07-13 07:53:51.942420] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.177 [2024-07-13 07:53:51.973117] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.436  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:46.436 00:07:46.436 07:53:52 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:46.436 07:53:52 -- dd/basic_rw.sh@23 -- # count=15 00:07:46.436 07:53:52 -- dd/basic_rw.sh@24 -- # count=15 00:07:46.436 07:53:52 -- dd/basic_rw.sh@25 -- # size=61440 00:07:46.436 07:53:52 -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:46.436 07:53:52 -- dd/common.sh@98 -- # xtrace_disable 00:07:46.436 07:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:47.005 07:53:52 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:47.005 07:53:52 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:47.005 07:53:52 -- dd/common.sh@31 -- # xtrace_disable 00:07:47.005 07:53:52 -- common/autotest_common.sh@10 -- # set +x 00:07:47.262 [2024-07-13 07:53:52.823306] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:47.262 [2024-07-13 07:53:52.823408] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68845 ] 00:07:47.262 { 00:07:47.262 "subsystems": [ 00:07:47.262 { 00:07:47.262 "subsystem": "bdev", 00:07:47.262 "config": [ 00:07:47.262 { 00:07:47.262 "params": { 00:07:47.262 "trtype": "pcie", 00:07:47.262 "traddr": "0000:00:06.0", 00:07:47.262 "name": "Nvme0" 00:07:47.262 }, 00:07:47.262 "method": "bdev_nvme_attach_controller" 00:07:47.262 }, 00:07:47.262 { 00:07:47.262 "method": "bdev_wait_for_examine" 00:07:47.262 } 00:07:47.262 ] 00:07:47.262 } 00:07:47.262 ] 00:07:47.262 } 00:07:47.262 [2024-07-13 07:53:52.961091] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.262 [2024-07-13 07:53:52.993736] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.520  Copying: 60/60 [kB] (average 58 MBps) 00:07:47.520 00:07:47.520 07:53:53 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:47.520 07:53:53 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:47.520 07:53:53 -- dd/common.sh@31 -- # xtrace_disable 00:07:47.520 07:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:47.520 [2024-07-13 07:53:53.306512] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:47.520 [2024-07-13 07:53:53.306636] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68852 ] 00:07:47.520 { 00:07:47.520 "subsystems": [ 00:07:47.520 { 00:07:47.520 "subsystem": "bdev", 00:07:47.520 "config": [ 00:07:47.520 { 00:07:47.520 "params": { 00:07:47.520 "trtype": "pcie", 00:07:47.520 "traddr": "0000:00:06.0", 00:07:47.520 "name": "Nvme0" 00:07:47.520 }, 00:07:47.520 "method": "bdev_nvme_attach_controller" 00:07:47.520 }, 00:07:47.520 { 00:07:47.520 "method": "bdev_wait_for_examine" 00:07:47.520 } 00:07:47.520 ] 00:07:47.520 } 00:07:47.520 ] 00:07:47.520 } 00:07:47.779 [2024-07-13 07:53:53.445634] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.779 [2024-07-13 07:53:53.475037] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.037  Copying: 60/60 [kB] (average 58 MBps) 00:07:48.037 00:07:48.037 07:53:53 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:48.038 07:53:53 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:48.038 07:53:53 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:48.038 07:53:53 -- dd/common.sh@11 -- # local nvme_ref= 00:07:48.038 07:53:53 -- dd/common.sh@12 -- # local size=61440 00:07:48.038 07:53:53 -- dd/common.sh@14 -- # local bs=1048576 00:07:48.038 07:53:53 -- dd/common.sh@15 -- # local count=1 00:07:48.038 07:53:53 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:48.038 07:53:53 -- dd/common.sh@18 -- # gen_conf 00:07:48.038 07:53:53 -- dd/common.sh@31 -- # xtrace_disable 00:07:48.038 07:53:53 -- common/autotest_common.sh@10 -- # set +x 00:07:48.038 [2024-07-13 07:53:53.765880] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 22.11.4 initialization... 00:07:48.038 [2024-07-13 07:53:53.765973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68865 ] 00:07:48.038 { 00:07:48.038 "subsystems": [ 00:07:48.038 { 00:07:48.038 "subsystem": "bdev", 00:07:48.038 "config": [ 00:07:48.038 { 00:07:48.038 "params": { 00:07:48.038 "trtype": "pcie", 00:07:48.038 "traddr": "0000:00:06.0", 00:07:48.038 "name": "Nvme0" 00:07:48.038 }, 00:07:48.038 "method": "bdev_nvme_attach_controller" 00:07:48.038 }, 00:07:48.038 { 00:07:48.038 "method": "bdev_wait_for_examine" 00:07:48.038 } 00:07:48.038 ] 00:07:48.038 } 00:07:48.038 ] 00:07:48.038 } 00:07:48.296 [2024-07-13 07:53:53.902197] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.296 [2024-07-13 07:53:53.931487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.556  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:48.556 00:07:48.556 07:53:54 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:48.556 07:53:54 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:48.556 07:53:54 -- dd/basic_rw.sh@23 -- # count=7 00:07:48.556 07:53:54 -- dd/basic_rw.sh@24 -- # count=7 00:07:48.556 07:53:54 -- dd/basic_rw.sh@25 -- # size=57344 00:07:48.556 07:53:54 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:48.556 07:53:54 -- dd/common.sh@98 -- # xtrace_disable 00:07:48.556 07:53:54 -- common/autotest_common.sh@10 -- # set +x 00:07:49.124 07:53:54 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:49.124 07:53:54 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:49.124 07:53:54 -- dd/common.sh@31 -- # xtrace_disable 00:07:49.124 07:53:54 -- common/autotest_common.sh@10 -- # set +x 00:07:49.124 [2024-07-13 07:53:54.765643] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:49.124 [2024-07-13 07:53:54.765755] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68877 ] 00:07:49.124 { 00:07:49.124 "subsystems": [ 00:07:49.124 { 00:07:49.124 "subsystem": "bdev", 00:07:49.124 "config": [ 00:07:49.124 { 00:07:49.124 "params": { 00:07:49.124 "trtype": "pcie", 00:07:49.124 "traddr": "0000:00:06.0", 00:07:49.124 "name": "Nvme0" 00:07:49.124 }, 00:07:49.124 "method": "bdev_nvme_attach_controller" 00:07:49.124 }, 00:07:49.124 { 00:07:49.124 "method": "bdev_wait_for_examine" 00:07:49.124 } 00:07:49.124 ] 00:07:49.124 } 00:07:49.124 ] 00:07:49.124 } 00:07:49.124 [2024-07-13 07:53:54.901309] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.124 [2024-07-13 07:53:54.931861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.383  Copying: 56/56 [kB] (average 27 MBps) 00:07:49.383 00:07:49.383 07:53:55 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:49.383 07:53:55 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:49.383 07:53:55 -- dd/common.sh@31 -- # xtrace_disable 00:07:49.383 07:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:49.642 [2024-07-13 07:53:55.230241] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:49.642 [2024-07-13 07:53:55.230329] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68884 ] 00:07:49.642 { 00:07:49.642 "subsystems": [ 00:07:49.642 { 00:07:49.642 "subsystem": "bdev", 00:07:49.642 "config": [ 00:07:49.642 { 00:07:49.642 "params": { 00:07:49.642 "trtype": "pcie", 00:07:49.642 "traddr": "0000:00:06.0", 00:07:49.642 "name": "Nvme0" 00:07:49.642 }, 00:07:49.642 "method": "bdev_nvme_attach_controller" 00:07:49.642 }, 00:07:49.642 { 00:07:49.642 "method": "bdev_wait_for_examine" 00:07:49.642 } 00:07:49.642 ] 00:07:49.642 } 00:07:49.642 ] 00:07:49.642 } 00:07:49.642 [2024-07-13 07:53:55.366344] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.642 [2024-07-13 07:53:55.396088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.901  Copying: 56/56 [kB] (average 27 MBps) 00:07:49.901 00:07:49.901 07:53:55 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:49.901 07:53:55 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:49.901 07:53:55 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:49.901 07:53:55 -- dd/common.sh@11 -- # local nvme_ref= 00:07:49.901 07:53:55 -- dd/common.sh@12 -- # local size=57344 00:07:49.901 07:53:55 -- dd/common.sh@14 -- # local bs=1048576 00:07:49.901 07:53:55 -- dd/common.sh@15 -- # local count=1 00:07:49.901 07:53:55 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:49.901 07:53:55 -- dd/common.sh@18 -- # gen_conf 00:07:49.901 07:53:55 -- dd/common.sh@31 -- # xtrace_disable 00:07:49.901 07:53:55 -- common/autotest_common.sh@10 -- # set +x 00:07:49.901 [2024-07-13 07:53:55.704940] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 22.11.4 initialization... 00:07:49.901 [2024-07-13 07:53:55.705071] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68897 ] 00:07:49.901 { 00:07:49.901 "subsystems": [ 00:07:49.901 { 00:07:49.901 "subsystem": "bdev", 00:07:49.901 "config": [ 00:07:49.901 { 00:07:49.901 "params": { 00:07:49.901 "trtype": "pcie", 00:07:49.901 "traddr": "0000:00:06.0", 00:07:49.901 "name": "Nvme0" 00:07:49.901 }, 00:07:49.901 "method": "bdev_nvme_attach_controller" 00:07:49.901 }, 00:07:49.901 { 00:07:49.901 "method": "bdev_wait_for_examine" 00:07:49.901 } 00:07:49.901 ] 00:07:49.901 } 00:07:49.901 ] 00:07:49.901 } 00:07:50.159 [2024-07-13 07:53:55.843294] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.159 [2024-07-13 07:53:55.879201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.417  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:50.417 00:07:50.417 07:53:56 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:50.417 07:53:56 -- dd/basic_rw.sh@23 -- # count=7 00:07:50.417 07:53:56 -- dd/basic_rw.sh@24 -- # count=7 00:07:50.417 07:53:56 -- dd/basic_rw.sh@25 -- # size=57344 00:07:50.417 07:53:56 -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:50.417 07:53:56 -- dd/common.sh@98 -- # xtrace_disable 00:07:50.417 07:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:50.984 07:53:56 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:50.984 07:53:56 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:50.984 07:53:56 -- dd/common.sh@31 -- # xtrace_disable 00:07:50.984 07:53:56 -- common/autotest_common.sh@10 -- # set +x 00:07:50.984 [2024-07-13 07:53:56.686464] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
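Every spdk_dd invocation in these passes receives the same generated JSON over --json: attach the PCIe controller at 0000:00:06.0 as bdev Nvme0, then bdev_wait_for_examine. In the trace gen_conf streams it over /dev/fd/62; writing the identical document to a file works the same way, as in this sketch (the file name conf.json is illustrative):

# Reproduce the bdev configuration that gen_conf pipes to spdk_dd above.
cat > conf.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": { "trtype": "pcie", "traddr": "0000:00:06.0", "name": "Nvme0" },
          "method": "bdev_nvme_attach_controller"
        },
        { "method": "bdev_wait_for_examine" }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json conf.json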
00:07:50.984 [2024-07-13 07:53:56.687113] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68909 ] 00:07:50.984 { 00:07:50.984 "subsystems": [ 00:07:50.984 { 00:07:50.984 "subsystem": "bdev", 00:07:50.984 "config": [ 00:07:50.984 { 00:07:50.984 "params": { 00:07:50.984 "trtype": "pcie", 00:07:50.984 "traddr": "0000:00:06.0", 00:07:50.984 "name": "Nvme0" 00:07:50.984 }, 00:07:50.984 "method": "bdev_nvme_attach_controller" 00:07:50.984 }, 00:07:50.984 { 00:07:50.984 "method": "bdev_wait_for_examine" 00:07:50.984 } 00:07:50.984 ] 00:07:50.984 } 00:07:50.984 ] 00:07:50.984 } 00:07:51.243 [2024-07-13 07:53:56.823281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.243 [2024-07-13 07:53:56.853218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.502  Copying: 56/56 [kB] (average 54 MBps) 00:07:51.502 00:07:51.502 07:53:57 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:51.502 07:53:57 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:51.502 07:53:57 -- dd/common.sh@31 -- # xtrace_disable 00:07:51.502 07:53:57 -- common/autotest_common.sh@10 -- # set +x 00:07:51.502 [2024-07-13 07:53:57.148249] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:51.502 [2024-07-13 07:53:57.148380] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68916 ] 00:07:51.502 { 00:07:51.502 "subsystems": [ 00:07:51.502 { 00:07:51.502 "subsystem": "bdev", 00:07:51.502 "config": [ 00:07:51.502 { 00:07:51.502 "params": { 00:07:51.502 "trtype": "pcie", 00:07:51.502 "traddr": "0000:00:06.0", 00:07:51.502 "name": "Nvme0" 00:07:51.502 }, 00:07:51.502 "method": "bdev_nvme_attach_controller" 00:07:51.502 }, 00:07:51.502 { 00:07:51.502 "method": "bdev_wait_for_examine" 00:07:51.502 } 00:07:51.502 ] 00:07:51.502 } 00:07:51.502 ] 00:07:51.502 } 00:07:51.502 [2024-07-13 07:53:57.286476] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.761 [2024-07-13 07:53:57.317256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.020  Copying: 56/56 [kB] (average 54 MBps) 00:07:52.020 00:07:52.020 07:53:57 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:52.020 07:53:57 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:52.020 07:53:57 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:52.020 07:53:57 -- dd/common.sh@11 -- # local nvme_ref= 00:07:52.020 07:53:57 -- dd/common.sh@12 -- # local size=57344 00:07:52.020 07:53:57 -- dd/common.sh@14 -- # local bs=1048576 00:07:52.020 07:53:57 -- dd/common.sh@15 -- # local count=1 00:07:52.020 07:53:57 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:52.020 07:53:57 -- dd/common.sh@18 -- # gen_conf 00:07:52.020 07:53:57 -- dd/common.sh@31 -- # xtrace_disable 00:07:52.020 07:53:57 -- common/autotest_common.sh@10 -- # set +x 00:07:52.020 [2024-07-13 07:53:57.625599] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 22.11.4 initialization... 00:07:52.020 [2024-07-13 07:53:57.625697] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68929 ] 00:07:52.020 { 00:07:52.020 "subsystems": [ 00:07:52.020 { 00:07:52.020 "subsystem": "bdev", 00:07:52.020 "config": [ 00:07:52.020 { 00:07:52.020 "params": { 00:07:52.020 "trtype": "pcie", 00:07:52.020 "traddr": "0000:00:06.0", 00:07:52.020 "name": "Nvme0" 00:07:52.020 }, 00:07:52.020 "method": "bdev_nvme_attach_controller" 00:07:52.020 }, 00:07:52.020 { 00:07:52.020 "method": "bdev_wait_for_examine" 00:07:52.020 } 00:07:52.020 ] 00:07:52.020 } 00:07:52.020 ] 00:07:52.020 } 00:07:52.020 [2024-07-13 07:53:57.757402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.020 [2024-07-13 07:53:57.787733] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.279  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:52.279 00:07:52.279 07:53:58 -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:52.279 07:53:58 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:52.279 07:53:58 -- dd/basic_rw.sh@23 -- # count=3 00:07:52.279 07:53:58 -- dd/basic_rw.sh@24 -- # count=3 00:07:52.279 07:53:58 -- dd/basic_rw.sh@25 -- # size=49152 00:07:52.279 07:53:58 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:52.279 07:53:58 -- dd/common.sh@98 -- # xtrace_disable 00:07:52.279 07:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:52.846 07:53:58 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:52.846 07:53:58 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:52.846 07:53:58 -- dd/common.sh@31 -- # xtrace_disable 00:07:52.846 07:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:52.846 [2024-07-13 07:53:58.528250] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
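The write/read/compare passes running above and below follow one sweep: block sizes are the native 4096 shifted left by 0, 1 and 2 (4096, 8192, 16384), queue depths are 1 and 64, and the byte count is chosen so each pass moves roughly 60 KiB (15x4096=61440, 7x8192=57344, 3x16384=49152, matching the count= and size= values in the trace). A bash sketch of that sweep, assuming conf.json from the earlier sketch and using /dev/urandom as a stand-in for the script's gen_bytes helper:

# Sweep sketched from the basic_rw trace: three block sizes, two queue depths,
# write a random file to the bdev, read it back, require an exact match.
SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
native_bs=4096
bss=($((native_bs << 0)) $((native_bs << 1)) $((native_bs << 2)))   # 4096 8192 16384
qds=(1 64)
for bs in "${bss[@]}"; do
    for qd in "${qds[@]}"; do
        count=$((61440 / bs))            # 15, 7, 3 -> sizes 61440, 57344, 49152
        size=$((count * bs))
        head -c "$size" /dev/urandom > dd.dump0        # stand-in for gen_bytes
        "$SPDK_DD" --if=dd.dump0 --ob=Nvme0n1 --bs="$bs" --qd="$qd" --json conf.json
        "$SPDK_DD" --ib=Nvme0n1 --of=dd.dump1 --bs="$bs" --qd="$qd" --count="$count" --json conf.json
        diff -q dd.dump0 dd.dump1        # the data must round-trip unchanged
    done
done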
00:07:52.846 [2024-07-13 07:53:58.528371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68936 ] 00:07:52.846 { 00:07:52.846 "subsystems": [ 00:07:52.846 { 00:07:52.846 "subsystem": "bdev", 00:07:52.846 "config": [ 00:07:52.846 { 00:07:52.846 "params": { 00:07:52.846 "trtype": "pcie", 00:07:52.846 "traddr": "0000:00:06.0", 00:07:52.846 "name": "Nvme0" 00:07:52.846 }, 00:07:52.846 "method": "bdev_nvme_attach_controller" 00:07:52.846 }, 00:07:52.846 { 00:07:52.846 "method": "bdev_wait_for_examine" 00:07:52.846 } 00:07:52.846 ] 00:07:52.846 } 00:07:52.846 ] 00:07:52.846 } 00:07:53.105 [2024-07-13 07:53:58.664975] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.105 [2024-07-13 07:53:58.696983] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.364  Copying: 48/48 [kB] (average 46 MBps) 00:07:53.364 00:07:53.364 07:53:58 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:53.364 07:53:58 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:53.364 07:53:58 -- dd/common.sh@31 -- # xtrace_disable 00:07:53.364 07:53:58 -- common/autotest_common.sh@10 -- # set +x 00:07:53.364 [2024-07-13 07:53:59.004415] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:53.364 [2024-07-13 07:53:59.004524] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68948 ] 00:07:53.364 { 00:07:53.364 "subsystems": [ 00:07:53.364 { 00:07:53.364 "subsystem": "bdev", 00:07:53.364 "config": [ 00:07:53.364 { 00:07:53.364 "params": { 00:07:53.364 "trtype": "pcie", 00:07:53.364 "traddr": "0000:00:06.0", 00:07:53.364 "name": "Nvme0" 00:07:53.364 }, 00:07:53.364 "method": "bdev_nvme_attach_controller" 00:07:53.364 }, 00:07:53.364 { 00:07:53.364 "method": "bdev_wait_for_examine" 00:07:53.364 } 00:07:53.364 ] 00:07:53.364 } 00:07:53.364 ] 00:07:53.364 } 00:07:53.364 [2024-07-13 07:53:59.141009] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.364 [2024-07-13 07:53:59.170598] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.623  Copying: 48/48 [kB] (average 46 MBps) 00:07:53.623 00:07:53.623 07:53:59 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:53.623 07:53:59 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:53.623 07:53:59 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:53.623 07:53:59 -- dd/common.sh@11 -- # local nvme_ref= 00:07:53.623 07:53:59 -- dd/common.sh@12 -- # local size=49152 00:07:53.623 07:53:59 -- dd/common.sh@14 -- # local bs=1048576 00:07:53.623 07:53:59 -- dd/common.sh@15 -- # local count=1 00:07:53.623 07:53:59 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:53.623 07:53:59 -- dd/common.sh@18 -- # gen_conf 00:07:53.623 07:53:59 -- dd/common.sh@31 -- # xtrace_disable 00:07:53.623 07:53:59 -- common/autotest_common.sh@10 -- # set +x 00:07:53.881 [2024-07-13 07:53:59.465467] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 22.11.4 initialization... 00:07:53.881 [2024-07-13 07:53:59.465553] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68956 ] 00:07:53.881 { 00:07:53.881 "subsystems": [ 00:07:53.881 { 00:07:53.881 "subsystem": "bdev", 00:07:53.881 "config": [ 00:07:53.881 { 00:07:53.881 "params": { 00:07:53.881 "trtype": "pcie", 00:07:53.881 "traddr": "0000:00:06.0", 00:07:53.881 "name": "Nvme0" 00:07:53.881 }, 00:07:53.881 "method": "bdev_nvme_attach_controller" 00:07:53.881 }, 00:07:53.881 { 00:07:53.881 "method": "bdev_wait_for_examine" 00:07:53.881 } 00:07:53.881 ] 00:07:53.881 } 00:07:53.881 ] 00:07:53.881 } 00:07:53.881 [2024-07-13 07:53:59.602350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.881 [2024-07-13 07:53:59.631702] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.139  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:54.139 00:07:54.139 07:53:59 -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:54.139 07:53:59 -- dd/basic_rw.sh@23 -- # count=3 00:07:54.139 07:53:59 -- dd/basic_rw.sh@24 -- # count=3 00:07:54.139 07:53:59 -- dd/basic_rw.sh@25 -- # size=49152 00:07:54.139 07:53:59 -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:54.139 07:53:59 -- dd/common.sh@98 -- # xtrace_disable 00:07:54.139 07:53:59 -- common/autotest_common.sh@10 -- # set +x 00:07:54.706 07:54:00 -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:54.706 07:54:00 -- dd/basic_rw.sh@30 -- # gen_conf 00:07:54.706 07:54:00 -- dd/common.sh@31 -- # xtrace_disable 00:07:54.706 07:54:00 -- common/autotest_common.sh@10 -- # set +x 00:07:54.706 [2024-07-13 07:54:00.410092] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
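Between passes the trace calls clear_nvme, which blanks the bdev by writing a single 1 MiB block of zeros through spdk_dd so the next read-back cannot accidentally match stale data. A reduced sketch of that step follows; the traced helper also takes an nvme_ref and a size argument (clear_nvme Nvme0n1 '' 49152), which are dropped here, and conf.json again refers to the configuration sketched earlier:

# Blank the target bdev between passes, as the clear_nvme calls above do.
clear_nvme() {
    local bdev=$1
    # One 1 MiB zero block comfortably covers the 48-60 KiB regions written by these tests.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/dev/zero --bs=1048576 --ob="$bdev" --count=1 --json conf.json
}
clear_nvme Nvme0n1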
00:07:54.706 [2024-07-13 07:54:00.410235] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68968 ] 00:07:54.706 { 00:07:54.706 "subsystems": [ 00:07:54.706 { 00:07:54.706 "subsystem": "bdev", 00:07:54.706 "config": [ 00:07:54.706 { 00:07:54.706 "params": { 00:07:54.706 "trtype": "pcie", 00:07:54.706 "traddr": "0000:00:06.0", 00:07:54.706 "name": "Nvme0" 00:07:54.706 }, 00:07:54.706 "method": "bdev_nvme_attach_controller" 00:07:54.706 }, 00:07:54.706 { 00:07:54.706 "method": "bdev_wait_for_examine" 00:07:54.706 } 00:07:54.706 ] 00:07:54.706 } 00:07:54.706 ] 00:07:54.706 } 00:07:54.964 [2024-07-13 07:54:00.547347] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.964 [2024-07-13 07:54:00.577895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.222  Copying: 48/48 [kB] (average 46 MBps) 00:07:55.222 00:07:55.223 07:54:00 -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:55.223 07:54:00 -- dd/basic_rw.sh@37 -- # gen_conf 00:07:55.223 07:54:00 -- dd/common.sh@31 -- # xtrace_disable 00:07:55.223 07:54:00 -- common/autotest_common.sh@10 -- # set +x 00:07:55.223 [2024-07-13 07:54:00.886006] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:55.223 [2024-07-13 07:54:00.886101] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68980 ] 00:07:55.223 { 00:07:55.223 "subsystems": [ 00:07:55.223 { 00:07:55.223 "subsystem": "bdev", 00:07:55.223 "config": [ 00:07:55.223 { 00:07:55.223 "params": { 00:07:55.223 "trtype": "pcie", 00:07:55.223 "traddr": "0000:00:06.0", 00:07:55.223 "name": "Nvme0" 00:07:55.223 }, 00:07:55.223 "method": "bdev_nvme_attach_controller" 00:07:55.223 }, 00:07:55.223 { 00:07:55.223 "method": "bdev_wait_for_examine" 00:07:55.223 } 00:07:55.223 ] 00:07:55.223 } 00:07:55.223 ] 00:07:55.223 } 00:07:55.223 [2024-07-13 07:54:01.022427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.482 [2024-07-13 07:54:01.053417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.482  Copying: 48/48 [kB] (average 46 MBps) 00:07:55.482 00:07:55.740 07:54:01 -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:55.740 07:54:01 -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:55.740 07:54:01 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:55.740 07:54:01 -- dd/common.sh@11 -- # local nvme_ref= 00:07:55.740 07:54:01 -- dd/common.sh@12 -- # local size=49152 00:07:55.740 07:54:01 -- dd/common.sh@14 -- # local bs=1048576 00:07:55.740 07:54:01 -- dd/common.sh@15 -- # local count=1 00:07:55.740 07:54:01 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:55.740 07:54:01 -- dd/common.sh@18 -- # gen_conf 00:07:55.740 07:54:01 -- dd/common.sh@31 -- # xtrace_disable 00:07:55.740 07:54:01 -- common/autotest_common.sh@10 -- # set +x 00:07:55.740 [2024-07-13 07:54:01.358084] Starting SPDK v24.01.1-pre git sha1 
4b94202c6 / DPDK 22.11.4 initialization... 00:07:55.740 [2024-07-13 07:54:01.358681] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68988 ] 00:07:55.740 { 00:07:55.740 "subsystems": [ 00:07:55.740 { 00:07:55.740 "subsystem": "bdev", 00:07:55.740 "config": [ 00:07:55.740 { 00:07:55.740 "params": { 00:07:55.740 "trtype": "pcie", 00:07:55.740 "traddr": "0000:00:06.0", 00:07:55.740 "name": "Nvme0" 00:07:55.740 }, 00:07:55.740 "method": "bdev_nvme_attach_controller" 00:07:55.740 }, 00:07:55.740 { 00:07:55.740 "method": "bdev_wait_for_examine" 00:07:55.740 } 00:07:55.740 ] 00:07:55.740 } 00:07:55.740 ] 00:07:55.740 } 00:07:55.740 [2024-07-13 07:54:01.498365] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.740 [2024-07-13 07:54:01.530696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.998  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:55.998 00:07:55.998 00:07:55.998 real 0m11.586s 00:07:55.998 user 0m8.452s 00:07:55.998 sys 0m2.045s 00:07:55.998 07:54:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:55.998 07:54:01 -- common/autotest_common.sh@10 -- # set +x 00:07:55.998 ************************************ 00:07:55.998 END TEST dd_rw 00:07:55.998 ************************************ 00:07:56.256 07:54:01 -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:56.257 07:54:01 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:56.257 07:54:01 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:56.257 07:54:01 -- common/autotest_common.sh@10 -- # set +x 00:07:56.257 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:56.257 ************************************ 00:07:56.257 START TEST dd_rw_offset 00:07:56.257 ************************************ 00:07:56.257 07:54:01 -- common/autotest_common.sh@1104 -- # basic_offset 00:07:56.257 07:54:01 -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:56.257 07:54:01 -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:56.257 07:54:01 -- dd/common.sh@98 -- # xtrace_disable 00:07:56.257 07:54:01 -- common/autotest_common.sh@10 -- # set +x 00:07:56.257 07:54:01 -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:56.257 07:54:01 -- dd/basic_rw.sh@56 -- # 
data=qu06g3t6nfurpvgmwheqzen967kyamezl9g82v1vmwvfyrahff1bxw8w6hscafqt7zp46ptno60yr69uxauzq9liddtjy2unyzir98iac6nlbxpowbyzbpinkoxjig7mxb98q66r3e45ogzc4lgajhovsvzee7uyssxbaps6t6imfwvqqhzpcst4eu796gfatx92nprk0kagu62452rswymllsfgu9g4w2uq6cvp61gtguwpd73vovewxep4w0urkkn27ml1prygo95ukeettjrbzodvd5jihi0713ghgwwmrnqjxdcxhckal979pvjo7uz7g2vfo7kdffx42lfv8qch6uoat6sh1grkz3kttwaswz057j8hgmi11vh2z99jbf3n50lnnyiqae2st7a6zdsohrlfl3ekcduffpdsnydmzebk2avkm0nlf02q2jd3v5e7ppqx1zi0xtdiiyj6drkxlh65qu0yd11fsbjmuiigw4r4zl4fz0dk6c64ggfkeenunpmj5at063l6767iwuvn5jcbvc5k5z72feaj0u0r0eb06xzp059gaoqnyybc68zh211ukn76gl8y5ay0zlououm8vunqb9b4nkmxp6xsodwv2spyvhq5hkumh3ubpxl16svrkeg0df031juyg1mdqq2vxwa0118h0lfjygrxfyatnk0vqs315fdpbgh4oda7uuxmzdl589pf6ejagxj0d5dl2gy5prz18nkoadidgrkc36toeoj9zsa905cdmjrccp1m8p514giyzv3k8y0ltyc9475eonpq9m41icqjaj7jigebqb6bkhy0mrg5z40vup0cwv59bsx065cu27fufryqvtacrd592jfv4kj62pizeji6dca6mdr62uln25cqy8gg46i3mo1yc4k9p0uj0wi7q79hzgj5d2v0xj5q9m7ozfe11c24pqrqbswf9n7fv4v3zcqaijrstmp2yubljx2ts0t602i82t9co09fhkjtrl0n56ccc3rtzs38rpdpfmikd5ef8uakswp26gmi0ldsd1htdhq8i57fq9e4t9wwwvowi8pboi6q8wvxzq6ikizajcxpvfqkqbefwiin6vutgaebup09f78xptmlqpxahcfexwxkqylt329ncxn2cjao8d0bhw0ka9c8t5rbuw9oykb5cq7sr9qo3e7u1iooz3jm26ywrbpcxcoaxluvnmdj9ki2jw9oz54qljm2tjj6rmuptmpl85qbuyf6a89on3ggxm5xmwurzhm81t3988e8gc2km2pclljj4xtnjohiicbjrwz18l4fsl1bnpfz2j2m1mk4c4u5v00i54yspjfvcaaj6phg5yryb1oldmp64wqh09bed0xw5k0kebc9uq49qtkbdqvbwzvggpnlzegx5w4rbvecbniq17a479gpxjoid5q99jedfjmu4tzfcw7g1voms4eylk1ds62dspqopntpnl2nj7mgfrdx6a0zzn4p5ari61r1vx0wmvuu7iq7jsxhwlnnd95ifvbv3hs8x8z6cqgzk5s1iwk1zbh8w3ca91ftyppmv4xca4p3fxg0b3m2u1oioe5ysxp0v39io34kif9ng8q0y9b7ilhvfvqsgdem2r0cexfmyrdxcvqkfq6ud6bov9pcvu3ny8zvvy11yr4baodr2eqruc5oeiabmnp0ygmlgth85gvy15iyk3f4gclaonrx4qifcrwg39ndd7uw80d6ab0js7bspx61h538m01eqqb4ndlq6h9qcjutsccw5i8l030mfu3tre5euaexfu99i5r5tqh82otcs95qpewkcwzi83rwufjz00lrgi69gzlxaxq9ab4q4yd32vzch7wnfqzbnu6aht8g4p6hl5sqmttli9mdr2e0nnmwum22yxyt0us4eb1qgo4t2uvs9o6mvvx4o39wj0kteztt4iktjrry6onb1yezm17qddpxfthmb2kzi0fwxksti0ye0gi05ljkylmieg5z4jb8g0ehzq7uwx84dr6s9jew71bxuhhc14qm1kypz5afz3vvy5ipi1t7twiipvu6xoxazfrn5b5kvt9xj2n8wki9fxnkqxclsrcl9tzhcp4vmwnf5rqyrxxcdqhct3t0fdi4niqgm09aco3wyw9uxti8z8a7tci025npllsw4mggghsyad9svtdoq22efmgp271v10p6cbwfs26maf13ov8jcyxpp2kqa3v0umy6zudxa3q7etjd4gurg26j3sterf2ui72v7tmnsdx1ntcsqyofbqd19qf2wq9nylljr83dsc1yjwa5mz6lgugr0gkx8c798tes51wn9hgsgirz7jxhip94volvje71j4cnevoswnvhsm3vgnlol90z9xokmrpplzzvtwmp5jseaet7k71qg06c9dtq774i1uuayvd61vw0adbmml8p10nmrhthtwcf5a1b1t7ccsogn3d89x6knm6qfligz3eu4ciz63e0vat26bk4vgl8tdh7qrur3jbwlikw2fxikrmyp3tw2dnfq1mlp3uxus0ard26uo8ne0eapfwuaphrr1a8g5akybgnt38a6ilirg8f9s3amqcqn22uzafxq81in2oqyuxooj4zzymqpxmc7h12w34pweg97cbns1vn4kgamv9ab2l39s270u1l4el81g5oih6hepqmyz4xkcx3624rhsnvgqgjhft98q4ua0s4p4bvnq6d92nu8wyo9sxa0fryfs2dvmbdn1h8w7lsgo43ozbal7wcyul02udil6zylwlwqr0xeb7nwmwhboxcu2glehumocwm4373e4r2erv1b1vfw6rcss2m0fcghfztumqx2h7jyppb0pz973xcdcs1jmzj4ciq2jdn1fn6ey48tyins0b2yzu5ovqhrw7wj73rsnbuh1a3yoqz91c30zvmgropyha6r123gf61hksb382ksmi3bhg7ty424a63ssvwc8za75prbd9exjzkfxs6ea26jggi3w0n27x8i7cm77rlhhydian5voowajs1xgrrnaw49a9zr8ffflld5z78ke0mxqoacnsmg2gdcd5fllxxuceasjxffggrs39jgkyfnse5dijjqa0nr49n7zw9uieaoxfadv8wm3hbqtzog5kh8mpvwn9vkhgrlx6zbrm8no08odsgh1e7oeqvlxvmxmwm6anc5x0na7ao6do6holuy085oi3zs547eja4xa5vlxo3qopidfbgs6rqrer0r01u3gtzm5rpqjf9jqkiaimnvtrj3rjrk0ppprajzin92sr3y2kokwlykyzut177hpopicx7z250zuzwlfk8k7hvnrus3slzoegr7yy2b9mx4ol1q8gf5vgnlz9orp8yikuioa2f2w2ga6gkwkqhhonfno69cpc1xzo8z38epg06m07ej9tz7k2mj5czwh5qr0rzel09w3ww8z31yiejhz06lzw8dinoms2kvekzwf6te1c1vu8z5e1mp6xwhqxv7t7yhylobq4ktmookmlkhy
sphoe3v94qdfser6a82iqsxlxflhpajutkic5211ccsiwqzbuvosvdmz7uiskivcsynqh8gltnq7miw7bt2n0q7q1a8ii33u2qpeb9x2tdhhkg0e3fi5fsdu75napd3co6amxt7sf5bsrfkboh4swa2cbbx1ap0s81z9ext6y0ns0qfbo6192k9o7jp7dxn3gqvo48qjr2317o1b7azftiob9ccpuhqy5dro5xlag3ucbqumauhif1nz9c5yh8awcg7jkjdz8hk91w79w0a3vfm14g2vppr39ot7pj8ff7o6fr83b1qkb7il4zz3r09lwrfgoj2w4tpzwwvw4o7ph4juxfpvmeafl43adkoa0ypmgfnptp9ds5otq7d91dr199e11pprww0y2egcx7m4wjurro8q7tcqk8pbepnwud6m9uxd3yytwpvaq56iw9xt49y2ishiv9uqwp770anj93utgzrnmrpbd4f48dqavg6hdj82ued7b0gukksvjfwkgvpx5lhn52nwd42733fio2jvv90nsrv8p9 00:07:56.257 07:54:01 -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:56.257 07:54:01 -- dd/basic_rw.sh@59 -- # gen_conf 00:07:56.257 07:54:01 -- dd/common.sh@31 -- # xtrace_disable 00:07:56.257 07:54:01 -- common/autotest_common.sh@10 -- # set +x 00:07:56.257 [2024-07-13 07:54:01.936654] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:56.257 [2024-07-13 07:54:01.936748] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69017 ] 00:07:56.257 { 00:07:56.257 "subsystems": [ 00:07:56.257 { 00:07:56.257 "subsystem": "bdev", 00:07:56.257 "config": [ 00:07:56.257 { 00:07:56.257 "params": { 00:07:56.257 "trtype": "pcie", 00:07:56.257 "traddr": "0000:00:06.0", 00:07:56.257 "name": "Nvme0" 00:07:56.257 }, 00:07:56.257 "method": "bdev_nvme_attach_controller" 00:07:56.257 }, 00:07:56.257 { 00:07:56.257 "method": "bdev_wait_for_examine" 00:07:56.257 } 00:07:56.257 ] 00:07:56.257 } 00:07:56.257 ] 00:07:56.257 } 00:07:56.515 [2024-07-13 07:54:02.076643] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.515 [2024-07-13 07:54:02.114996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.774  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:56.774 00:07:56.774 07:54:02 -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:56.774 07:54:02 -- dd/basic_rw.sh@65 -- # gen_conf 00:07:56.774 07:54:02 -- dd/common.sh@31 -- # xtrace_disable 00:07:56.774 07:54:02 -- common/autotest_common.sh@10 -- # set +x 00:07:56.774 [2024-07-13 07:54:02.418091] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
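The dd_rw_offset case above is the same round trip at a block offset: write one 4096-byte chunk of random data with --seek=1, read it back with --skip=1 --count=1, and compare. A rough stand-alone equivalent (paths, names and the fd-62 plumbing are placeholders as before; tr is only a stand-in for the suite's gen_bytes helper):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
conf='{"subsystems":[{"subsystem":"bdev","config":[{"params":{"trtype":"pcie","traddr":"0000:00:06.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},{"method":"bdev_wait_for_examine"}]}]}'
data=$(tr -dc 'a-z0-9' </dev/urandom | head -c 4096)   # 4 KiB of random test data
printf %s "$data" > dd.dump0
"$DD" --if=dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 62<<<"$conf"            # write at block offset 1
"$DD" --ib=Nvme0n1 --of=dd.dump1 --skip=1 --count=1 --json /dev/fd/62 62<<<"$conf"  # read the same block back
read -rn4096 data_check < dd.dump1
[[ $data == "$data_check" ]] && echo "offset round trip ok"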
00:07:56.774 [2024-07-13 07:54:02.418181] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69024 ] 00:07:56.774 { 00:07:56.774 "subsystems": [ 00:07:56.774 { 00:07:56.774 "subsystem": "bdev", 00:07:56.774 "config": [ 00:07:56.774 { 00:07:56.774 "params": { 00:07:56.774 "trtype": "pcie", 00:07:56.774 "traddr": "0000:00:06.0", 00:07:56.774 "name": "Nvme0" 00:07:56.774 }, 00:07:56.774 "method": "bdev_nvme_attach_controller" 00:07:56.774 }, 00:07:56.774 { 00:07:56.774 "method": "bdev_wait_for_examine" 00:07:56.774 } 00:07:56.774 ] 00:07:56.774 } 00:07:56.774 ] 00:07:56.774 } 00:07:56.774 [2024-07-13 07:54:02.555083] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.774 [2024-07-13 07:54:02.584523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.031  Copying: 4096/4096 [B] (average 4000 kBps) 00:07:57.031 00:07:57.290 07:54:02 -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:07:57.291 07:54:02 -- dd/basic_rw.sh@72 -- # [[ qu06g3t6nfurpvgmwheqzen967kyamezl9g82v1vmwvfyrahff1bxw8w6hscafqt7zp46ptno60yr69uxauzq9liddtjy2unyzir98iac6nlbxpowbyzbpinkoxjig7mxb98q66r3e45ogzc4lgajhovsvzee7uyssxbaps6t6imfwvqqhzpcst4eu796gfatx92nprk0kagu62452rswymllsfgu9g4w2uq6cvp61gtguwpd73vovewxep4w0urkkn27ml1prygo95ukeettjrbzodvd5jihi0713ghgwwmrnqjxdcxhckal979pvjo7uz7g2vfo7kdffx42lfv8qch6uoat6sh1grkz3kttwaswz057j8hgmi11vh2z99jbf3n50lnnyiqae2st7a6zdsohrlfl3ekcduffpdsnydmzebk2avkm0nlf02q2jd3v5e7ppqx1zi0xtdiiyj6drkxlh65qu0yd11fsbjmuiigw4r4zl4fz0dk6c64ggfkeenunpmj5at063l6767iwuvn5jcbvc5k5z72feaj0u0r0eb06xzp059gaoqnyybc68zh211ukn76gl8y5ay0zlououm8vunqb9b4nkmxp6xsodwv2spyvhq5hkumh3ubpxl16svrkeg0df031juyg1mdqq2vxwa0118h0lfjygrxfyatnk0vqs315fdpbgh4oda7uuxmzdl589pf6ejagxj0d5dl2gy5prz18nkoadidgrkc36toeoj9zsa905cdmjrccp1m8p514giyzv3k8y0ltyc9475eonpq9m41icqjaj7jigebqb6bkhy0mrg5z40vup0cwv59bsx065cu27fufryqvtacrd592jfv4kj62pizeji6dca6mdr62uln25cqy8gg46i3mo1yc4k9p0uj0wi7q79hzgj5d2v0xj5q9m7ozfe11c24pqrqbswf9n7fv4v3zcqaijrstmp2yubljx2ts0t602i82t9co09fhkjtrl0n56ccc3rtzs38rpdpfmikd5ef8uakswp26gmi0ldsd1htdhq8i57fq9e4t9wwwvowi8pboi6q8wvxzq6ikizajcxpvfqkqbefwiin6vutgaebup09f78xptmlqpxahcfexwxkqylt329ncxn2cjao8d0bhw0ka9c8t5rbuw9oykb5cq7sr9qo3e7u1iooz3jm26ywrbpcxcoaxluvnmdj9ki2jw9oz54qljm2tjj6rmuptmpl85qbuyf6a89on3ggxm5xmwurzhm81t3988e8gc2km2pclljj4xtnjohiicbjrwz18l4fsl1bnpfz2j2m1mk4c4u5v00i54yspjfvcaaj6phg5yryb1oldmp64wqh09bed0xw5k0kebc9uq49qtkbdqvbwzvggpnlzegx5w4rbvecbniq17a479gpxjoid5q99jedfjmu4tzfcw7g1voms4eylk1ds62dspqopntpnl2nj7mgfrdx6a0zzn4p5ari61r1vx0wmvuu7iq7jsxhwlnnd95ifvbv3hs8x8z6cqgzk5s1iwk1zbh8w3ca91ftyppmv4xca4p3fxg0b3m2u1oioe5ysxp0v39io34kif9ng8q0y9b7ilhvfvqsgdem2r0cexfmyrdxcvqkfq6ud6bov9pcvu3ny8zvvy11yr4baodr2eqruc5oeiabmnp0ygmlgth85gvy15iyk3f4gclaonrx4qifcrwg39ndd7uw80d6ab0js7bspx61h538m01eqqb4ndlq6h9qcjutsccw5i8l030mfu3tre5euaexfu99i5r5tqh82otcs95qpewkcwzi83rwufjz00lrgi69gzlxaxq9ab4q4yd32vzch7wnfqzbnu6aht8g4p6hl5sqmttli9mdr2e0nnmwum22yxyt0us4eb1qgo4t2uvs9o6mvvx4o39wj0kteztt4iktjrry6onb1yezm17qddpxfthmb2kzi0fwxksti0ye0gi05ljkylmieg5z4jb8g0ehzq7uwx84dr6s9jew71bxuhhc14qm1kypz5afz3vvy5ipi1t7twiipvu6xoxazfrn5b5kvt9xj2n8wki9fxnkqxclsrcl9tzhcp4vmwnf5rqyrxxcdqhct3t0fdi4niqgm09aco3wyw9uxti8z8a7tci025npllsw4mggghsyad9svtdoq22efmgp271v10p6cbwfs26maf13ov8jcyxpp2kqa3v0umy6zudxa3q7etjd4gurg26j3sterf2ui72v7tmnsdx1ntcsqyofbqd19qf2wq9nylljr83dsc1yjwa5mz6lgugr0gkx8c798tes51wn9hgsgirz7jxhip94volvje71j4cnevoswnvhsm
3vgnlol90z9xokmrpplzzvtwmp5jseaet7k71qg06c9dtq774i1uuayvd61vw0adbmml8p10nmrhthtwcf5a1b1t7ccsogn3d89x6knm6qfligz3eu4ciz63e0vat26bk4vgl8tdh7qrur3jbwlikw2fxikrmyp3tw2dnfq1mlp3uxus0ard26uo8ne0eapfwuaphrr1a8g5akybgnt38a6ilirg8f9s3amqcqn22uzafxq81in2oqyuxooj4zzymqpxmc7h12w34pweg97cbns1vn4kgamv9ab2l39s270u1l4el81g5oih6hepqmyz4xkcx3624rhsnvgqgjhft98q4ua0s4p4bvnq6d92nu8wyo9sxa0fryfs2dvmbdn1h8w7lsgo43ozbal7wcyul02udil6zylwlwqr0xeb7nwmwhboxcu2glehumocwm4373e4r2erv1b1vfw6rcss2m0fcghfztumqx2h7jyppb0pz973xcdcs1jmzj4ciq2jdn1fn6ey48tyins0b2yzu5ovqhrw7wj73rsnbuh1a3yoqz91c30zvmgropyha6r123gf61hksb382ksmi3bhg7ty424a63ssvwc8za75prbd9exjzkfxs6ea26jggi3w0n27x8i7cm77rlhhydian5voowajs1xgrrnaw49a9zr8ffflld5z78ke0mxqoacnsmg2gdcd5fllxxuceasjxffggrs39jgkyfnse5dijjqa0nr49n7zw9uieaoxfadv8wm3hbqtzog5kh8mpvwn9vkhgrlx6zbrm8no08odsgh1e7oeqvlxvmxmwm6anc5x0na7ao6do6holuy085oi3zs547eja4xa5vlxo3qopidfbgs6rqrer0r01u3gtzm5rpqjf9jqkiaimnvtrj3rjrk0ppprajzin92sr3y2kokwlykyzut177hpopicx7z250zuzwlfk8k7hvnrus3slzoegr7yy2b9mx4ol1q8gf5vgnlz9orp8yikuioa2f2w2ga6gkwkqhhonfno69cpc1xzo8z38epg06m07ej9tz7k2mj5czwh5qr0rzel09w3ww8z31yiejhz06lzw8dinoms2kvekzwf6te1c1vu8z5e1mp6xwhqxv7t7yhylobq4ktmookmlkhysphoe3v94qdfser6a82iqsxlxflhpajutkic5211ccsiwqzbuvosvdmz7uiskivcsynqh8gltnq7miw7bt2n0q7q1a8ii33u2qpeb9x2tdhhkg0e3fi5fsdu75napd3co6amxt7sf5bsrfkboh4swa2cbbx1ap0s81z9ext6y0ns0qfbo6192k9o7jp7dxn3gqvo48qjr2317o1b7azftiob9ccpuhqy5dro5xlag3ucbqumauhif1nz9c5yh8awcg7jkjdz8hk91w79w0a3vfm14g2vppr39ot7pj8ff7o6fr83b1qkb7il4zz3r09lwrfgoj2w4tpzwwvw4o7ph4juxfpvmeafl43adkoa0ypmgfnptp9ds5otq7d91dr199e11pprww0y2egcx7m4wjurro8q7tcqk8pbepnwud6m9uxd3yytwpvaq56iw9xt49y2ishiv9uqwp770anj93utgzrnmrpbd4f48dqavg6hdj82ued7b0gukksvjfwkgvpx5lhn52nwd42733fio2jvv90nsrv8p9 == \q\u\0\6\g\3\t\6\n\f\u\r\p\v\g\m\w\h\e\q\z\e\n\9\6\7\k\y\a\m\e\z\l\9\g\8\2\v\1\v\m\w\v\f\y\r\a\h\f\f\1\b\x\w\8\w\6\h\s\c\a\f\q\t\7\z\p\4\6\p\t\n\o\6\0\y\r\6\9\u\x\a\u\z\q\9\l\i\d\d\t\j\y\2\u\n\y\z\i\r\9\8\i\a\c\6\n\l\b\x\p\o\w\b\y\z\b\p\i\n\k\o\x\j\i\g\7\m\x\b\9\8\q\6\6\r\3\e\4\5\o\g\z\c\4\l\g\a\j\h\o\v\s\v\z\e\e\7\u\y\s\s\x\b\a\p\s\6\t\6\i\m\f\w\v\q\q\h\z\p\c\s\t\4\e\u\7\9\6\g\f\a\t\x\9\2\n\p\r\k\0\k\a\g\u\6\2\4\5\2\r\s\w\y\m\l\l\s\f\g\u\9\g\4\w\2\u\q\6\c\v\p\6\1\g\t\g\u\w\p\d\7\3\v\o\v\e\w\x\e\p\4\w\0\u\r\k\k\n\2\7\m\l\1\p\r\y\g\o\9\5\u\k\e\e\t\t\j\r\b\z\o\d\v\d\5\j\i\h\i\0\7\1\3\g\h\g\w\w\m\r\n\q\j\x\d\c\x\h\c\k\a\l\9\7\9\p\v\j\o\7\u\z\7\g\2\v\f\o\7\k\d\f\f\x\4\2\l\f\v\8\q\c\h\6\u\o\a\t\6\s\h\1\g\r\k\z\3\k\t\t\w\a\s\w\z\0\5\7\j\8\h\g\m\i\1\1\v\h\2\z\9\9\j\b\f\3\n\5\0\l\n\n\y\i\q\a\e\2\s\t\7\a\6\z\d\s\o\h\r\l\f\l\3\e\k\c\d\u\f\f\p\d\s\n\y\d\m\z\e\b\k\2\a\v\k\m\0\n\l\f\0\2\q\2\j\d\3\v\5\e\7\p\p\q\x\1\z\i\0\x\t\d\i\i\y\j\6\d\r\k\x\l\h\6\5\q\u\0\y\d\1\1\f\s\b\j\m\u\i\i\g\w\4\r\4\z\l\4\f\z\0\d\k\6\c\6\4\g\g\f\k\e\e\n\u\n\p\m\j\5\a\t\0\6\3\l\6\7\6\7\i\w\u\v\n\5\j\c\b\v\c\5\k\5\z\7\2\f\e\a\j\0\u\0\r\0\e\b\0\6\x\z\p\0\5\9\g\a\o\q\n\y\y\b\c\6\8\z\h\2\1\1\u\k\n\7\6\g\l\8\y\5\a\y\0\z\l\o\u\o\u\m\8\v\u\n\q\b\9\b\4\n\k\m\x\p\6\x\s\o\d\w\v\2\s\p\y\v\h\q\5\h\k\u\m\h\3\u\b\p\x\l\1\6\s\v\r\k\e\g\0\d\f\0\3\1\j\u\y\g\1\m\d\q\q\2\v\x\w\a\0\1\1\8\h\0\l\f\j\y\g\r\x\f\y\a\t\n\k\0\v\q\s\3\1\5\f\d\p\b\g\h\4\o\d\a\7\u\u\x\m\z\d\l\5\8\9\p\f\6\e\j\a\g\x\j\0\d\5\d\l\2\g\y\5\p\r\z\1\8\n\k\o\a\d\i\d\g\r\k\c\3\6\t\o\e\o\j\9\z\s\a\9\0\5\c\d\m\j\r\c\c\p\1\m\8\p\5\1\4\g\i\y\z\v\3\k\8\y\0\l\t\y\c\9\4\7\5\e\o\n\p\q\9\m\4\1\i\c\q\j\a\j\7\j\i\g\e\b\q\b\6\b\k\h\y\0\m\r\g\5\z\4\0\v\u\p\0\c\w\v\5\9\b\s\x\0\6\5\c\u\2\7\f\u\f\r\y\q\v\t\a\c\r\d\5\9\2\j\f\v\4\k\j\6\2\p\i\z\e\j\i\6\d\c\a\6\m\d\r\6\2\u\l\n\2\5\c\q\y\8\g\g\4\6\i\3\m\o\1\y\c\4\k\9\p\0\u\j\
0\w\i\7\q\7\9\h\z\g\j\5\d\2\v\0\x\j\5\q\9\m\7\o\z\f\e\1\1\c\2\4\p\q\r\q\b\s\w\f\9\n\7\f\v\4\v\3\z\c\q\a\i\j\r\s\t\m\p\2\y\u\b\l\j\x\2\t\s\0\t\6\0\2\i\8\2\t\9\c\o\0\9\f\h\k\j\t\r\l\0\n\5\6\c\c\c\3\r\t\z\s\3\8\r\p\d\p\f\m\i\k\d\5\e\f\8\u\a\k\s\w\p\2\6\g\m\i\0\l\d\s\d\1\h\t\d\h\q\8\i\5\7\f\q\9\e\4\t\9\w\w\w\v\o\w\i\8\p\b\o\i\6\q\8\w\v\x\z\q\6\i\k\i\z\a\j\c\x\p\v\f\q\k\q\b\e\f\w\i\i\n\6\v\u\t\g\a\e\b\u\p\0\9\f\7\8\x\p\t\m\l\q\p\x\a\h\c\f\e\x\w\x\k\q\y\l\t\3\2\9\n\c\x\n\2\c\j\a\o\8\d\0\b\h\w\0\k\a\9\c\8\t\5\r\b\u\w\9\o\y\k\b\5\c\q\7\s\r\9\q\o\3\e\7\u\1\i\o\o\z\3\j\m\2\6\y\w\r\b\p\c\x\c\o\a\x\l\u\v\n\m\d\j\9\k\i\2\j\w\9\o\z\5\4\q\l\j\m\2\t\j\j\6\r\m\u\p\t\m\p\l\8\5\q\b\u\y\f\6\a\8\9\o\n\3\g\g\x\m\5\x\m\w\u\r\z\h\m\8\1\t\3\9\8\8\e\8\g\c\2\k\m\2\p\c\l\l\j\j\4\x\t\n\j\o\h\i\i\c\b\j\r\w\z\1\8\l\4\f\s\l\1\b\n\p\f\z\2\j\2\m\1\m\k\4\c\4\u\5\v\0\0\i\5\4\y\s\p\j\f\v\c\a\a\j\6\p\h\g\5\y\r\y\b\1\o\l\d\m\p\6\4\w\q\h\0\9\b\e\d\0\x\w\5\k\0\k\e\b\c\9\u\q\4\9\q\t\k\b\d\q\v\b\w\z\v\g\g\p\n\l\z\e\g\x\5\w\4\r\b\v\e\c\b\n\i\q\1\7\a\4\7\9\g\p\x\j\o\i\d\5\q\9\9\j\e\d\f\j\m\u\4\t\z\f\c\w\7\g\1\v\o\m\s\4\e\y\l\k\1\d\s\6\2\d\s\p\q\o\p\n\t\p\n\l\2\n\j\7\m\g\f\r\d\x\6\a\0\z\z\n\4\p\5\a\r\i\6\1\r\1\v\x\0\w\m\v\u\u\7\i\q\7\j\s\x\h\w\l\n\n\d\9\5\i\f\v\b\v\3\h\s\8\x\8\z\6\c\q\g\z\k\5\s\1\i\w\k\1\z\b\h\8\w\3\c\a\9\1\f\t\y\p\p\m\v\4\x\c\a\4\p\3\f\x\g\0\b\3\m\2\u\1\o\i\o\e\5\y\s\x\p\0\v\3\9\i\o\3\4\k\i\f\9\n\g\8\q\0\y\9\b\7\i\l\h\v\f\v\q\s\g\d\e\m\2\r\0\c\e\x\f\m\y\r\d\x\c\v\q\k\f\q\6\u\d\6\b\o\v\9\p\c\v\u\3\n\y\8\z\v\v\y\1\1\y\r\4\b\a\o\d\r\2\e\q\r\u\c\5\o\e\i\a\b\m\n\p\0\y\g\m\l\g\t\h\8\5\g\v\y\1\5\i\y\k\3\f\4\g\c\l\a\o\n\r\x\4\q\i\f\c\r\w\g\3\9\n\d\d\7\u\w\8\0\d\6\a\b\0\j\s\7\b\s\p\x\6\1\h\5\3\8\m\0\1\e\q\q\b\4\n\d\l\q\6\h\9\q\c\j\u\t\s\c\c\w\5\i\8\l\0\3\0\m\f\u\3\t\r\e\5\e\u\a\e\x\f\u\9\9\i\5\r\5\t\q\h\8\2\o\t\c\s\9\5\q\p\e\w\k\c\w\z\i\8\3\r\w\u\f\j\z\0\0\l\r\g\i\6\9\g\z\l\x\a\x\q\9\a\b\4\q\4\y\d\3\2\v\z\c\h\7\w\n\f\q\z\b\n\u\6\a\h\t\8\g\4\p\6\h\l\5\s\q\m\t\t\l\i\9\m\d\r\2\e\0\n\n\m\w\u\m\2\2\y\x\y\t\0\u\s\4\e\b\1\q\g\o\4\t\2\u\v\s\9\o\6\m\v\v\x\4\o\3\9\w\j\0\k\t\e\z\t\t\4\i\k\t\j\r\r\y\6\o\n\b\1\y\e\z\m\1\7\q\d\d\p\x\f\t\h\m\b\2\k\z\i\0\f\w\x\k\s\t\i\0\y\e\0\g\i\0\5\l\j\k\y\l\m\i\e\g\5\z\4\j\b\8\g\0\e\h\z\q\7\u\w\x\8\4\d\r\6\s\9\j\e\w\7\1\b\x\u\h\h\c\1\4\q\m\1\k\y\p\z\5\a\f\z\3\v\v\y\5\i\p\i\1\t\7\t\w\i\i\p\v\u\6\x\o\x\a\z\f\r\n\5\b\5\k\v\t\9\x\j\2\n\8\w\k\i\9\f\x\n\k\q\x\c\l\s\r\c\l\9\t\z\h\c\p\4\v\m\w\n\f\5\r\q\y\r\x\x\c\d\q\h\c\t\3\t\0\f\d\i\4\n\i\q\g\m\0\9\a\c\o\3\w\y\w\9\u\x\t\i\8\z\8\a\7\t\c\i\0\2\5\n\p\l\l\s\w\4\m\g\g\g\h\s\y\a\d\9\s\v\t\d\o\q\2\2\e\f\m\g\p\2\7\1\v\1\0\p\6\c\b\w\f\s\2\6\m\a\f\1\3\o\v\8\j\c\y\x\p\p\2\k\q\a\3\v\0\u\m\y\6\z\u\d\x\a\3\q\7\e\t\j\d\4\g\u\r\g\2\6\j\3\s\t\e\r\f\2\u\i\7\2\v\7\t\m\n\s\d\x\1\n\t\c\s\q\y\o\f\b\q\d\1\9\q\f\2\w\q\9\n\y\l\l\j\r\8\3\d\s\c\1\y\j\w\a\5\m\z\6\l\g\u\g\r\0\g\k\x\8\c\7\9\8\t\e\s\5\1\w\n\9\h\g\s\g\i\r\z\7\j\x\h\i\p\9\4\v\o\l\v\j\e\7\1\j\4\c\n\e\v\o\s\w\n\v\h\s\m\3\v\g\n\l\o\l\9\0\z\9\x\o\k\m\r\p\p\l\z\z\v\t\w\m\p\5\j\s\e\a\e\t\7\k\7\1\q\g\0\6\c\9\d\t\q\7\7\4\i\1\u\u\a\y\v\d\6\1\v\w\0\a\d\b\m\m\l\8\p\1\0\n\m\r\h\t\h\t\w\c\f\5\a\1\b\1\t\7\c\c\s\o\g\n\3\d\8\9\x\6\k\n\m\6\q\f\l\i\g\z\3\e\u\4\c\i\z\6\3\e\0\v\a\t\2\6\b\k\4\v\g\l\8\t\d\h\7\q\r\u\r\3\j\b\w\l\i\k\w\2\f\x\i\k\r\m\y\p\3\t\w\2\d\n\f\q\1\m\l\p\3\u\x\u\s\0\a\r\d\2\6\u\o\8\n\e\0\e\a\p\f\w\u\a\p\h\r\r\1\a\8\g\5\a\k\y\b\g\n\t\3\8\a\6\i\l\i\r\g\8\f\9\s\3\a\m\q\c\q\n\2\2\u\z\a\f\x\q\8\1\i\n\2\o\q\y\u\x\o\o\j\4\z\z\y\m\q\p\x\m\c\7\h\1\2\w\3\4\p\w\e\g\9\7\c\b\n\s\1\v\n\4\k\g\a\m\v\9\a\b\2\l\3\9\s\2\7\0\u\1\l\4\e\l\8\1\g\5\o\i\h
\6\h\e\p\q\m\y\z\4\x\k\c\x\3\6\2\4\r\h\s\n\v\g\q\g\j\h\f\t\9\8\q\4\u\a\0\s\4\p\4\b\v\n\q\6\d\9\2\n\u\8\w\y\o\9\s\x\a\0\f\r\y\f\s\2\d\v\m\b\d\n\1\h\8\w\7\l\s\g\o\4\3\o\z\b\a\l\7\w\c\y\u\l\0\2\u\d\i\l\6\z\y\l\w\l\w\q\r\0\x\e\b\7\n\w\m\w\h\b\o\x\c\u\2\g\l\e\h\u\m\o\c\w\m\4\3\7\3\e\4\r\2\e\r\v\1\b\1\v\f\w\6\r\c\s\s\2\m\0\f\c\g\h\f\z\t\u\m\q\x\2\h\7\j\y\p\p\b\0\p\z\9\7\3\x\c\d\c\s\1\j\m\z\j\4\c\i\q\2\j\d\n\1\f\n\6\e\y\4\8\t\y\i\n\s\0\b\2\y\z\u\5\o\v\q\h\r\w\7\w\j\7\3\r\s\n\b\u\h\1\a\3\y\o\q\z\9\1\c\3\0\z\v\m\g\r\o\p\y\h\a\6\r\1\2\3\g\f\6\1\h\k\s\b\3\8\2\k\s\m\i\3\b\h\g\7\t\y\4\2\4\a\6\3\s\s\v\w\c\8\z\a\7\5\p\r\b\d\9\e\x\j\z\k\f\x\s\6\e\a\2\6\j\g\g\i\3\w\0\n\2\7\x\8\i\7\c\m\7\7\r\l\h\h\y\d\i\a\n\5\v\o\o\w\a\j\s\1\x\g\r\r\n\a\w\4\9\a\9\z\r\8\f\f\f\l\l\d\5\z\7\8\k\e\0\m\x\q\o\a\c\n\s\m\g\2\g\d\c\d\5\f\l\l\x\x\u\c\e\a\s\j\x\f\f\g\g\r\s\3\9\j\g\k\y\f\n\s\e\5\d\i\j\j\q\a\0\n\r\4\9\n\7\z\w\9\u\i\e\a\o\x\f\a\d\v\8\w\m\3\h\b\q\t\z\o\g\5\k\h\8\m\p\v\w\n\9\v\k\h\g\r\l\x\6\z\b\r\m\8\n\o\0\8\o\d\s\g\h\1\e\7\o\e\q\v\l\x\v\m\x\m\w\m\6\a\n\c\5\x\0\n\a\7\a\o\6\d\o\6\h\o\l\u\y\0\8\5\o\i\3\z\s\5\4\7\e\j\a\4\x\a\5\v\l\x\o\3\q\o\p\i\d\f\b\g\s\6\r\q\r\e\r\0\r\0\1\u\3\g\t\z\m\5\r\p\q\j\f\9\j\q\k\i\a\i\m\n\v\t\r\j\3\r\j\r\k\0\p\p\p\r\a\j\z\i\n\9\2\s\r\3\y\2\k\o\k\w\l\y\k\y\z\u\t\1\7\7\h\p\o\p\i\c\x\7\z\2\5\0\z\u\z\w\l\f\k\8\k\7\h\v\n\r\u\s\3\s\l\z\o\e\g\r\7\y\y\2\b\9\m\x\4\o\l\1\q\8\g\f\5\v\g\n\l\z\9\o\r\p\8\y\i\k\u\i\o\a\2\f\2\w\2\g\a\6\g\k\w\k\q\h\h\o\n\f\n\o\6\9\c\p\c\1\x\z\o\8\z\3\8\e\p\g\0\6\m\0\7\e\j\9\t\z\7\k\2\m\j\5\c\z\w\h\5\q\r\0\r\z\e\l\0\9\w\3\w\w\8\z\3\1\y\i\e\j\h\z\0\6\l\z\w\8\d\i\n\o\m\s\2\k\v\e\k\z\w\f\6\t\e\1\c\1\v\u\8\z\5\e\1\m\p\6\x\w\h\q\x\v\7\t\7\y\h\y\l\o\b\q\4\k\t\m\o\o\k\m\l\k\h\y\s\p\h\o\e\3\v\9\4\q\d\f\s\e\r\6\a\8\2\i\q\s\x\l\x\f\l\h\p\a\j\u\t\k\i\c\5\2\1\1\c\c\s\i\w\q\z\b\u\v\o\s\v\d\m\z\7\u\i\s\k\i\v\c\s\y\n\q\h\8\g\l\t\n\q\7\m\i\w\7\b\t\2\n\0\q\7\q\1\a\8\i\i\3\3\u\2\q\p\e\b\9\x\2\t\d\h\h\k\g\0\e\3\f\i\5\f\s\d\u\7\5\n\a\p\d\3\c\o\6\a\m\x\t\7\s\f\5\b\s\r\f\k\b\o\h\4\s\w\a\2\c\b\b\x\1\a\p\0\s\8\1\z\9\e\x\t\6\y\0\n\s\0\q\f\b\o\6\1\9\2\k\9\o\7\j\p\7\d\x\n\3\g\q\v\o\4\8\q\j\r\2\3\1\7\o\1\b\7\a\z\f\t\i\o\b\9\c\c\p\u\h\q\y\5\d\r\o\5\x\l\a\g\3\u\c\b\q\u\m\a\u\h\i\f\1\n\z\9\c\5\y\h\8\a\w\c\g\7\j\k\j\d\z\8\h\k\9\1\w\7\9\w\0\a\3\v\f\m\1\4\g\2\v\p\p\r\3\9\o\t\7\p\j\8\f\f\7\o\6\f\r\8\3\b\1\q\k\b\7\i\l\4\z\z\3\r\0\9\l\w\r\f\g\o\j\2\w\4\t\p\z\w\w\v\w\4\o\7\p\h\4\j\u\x\f\p\v\m\e\a\f\l\4\3\a\d\k\o\a\0\y\p\m\g\f\n\p\t\p\9\d\s\5\o\t\q\7\d\9\1\d\r\1\9\9\e\1\1\p\p\r\w\w\0\y\2\e\g\c\x\7\m\4\w\j\u\r\r\o\8\q\7\t\c\q\k\8\p\b\e\p\n\w\u\d\6\m\9\u\x\d\3\y\y\t\w\p\v\a\q\5\6\i\w\9\x\t\4\9\y\2\i\s\h\i\v\9\u\q\w\p\7\7\0\a\n\j\9\3\u\t\g\z\r\n\m\r\p\b\d\4\f\4\8\d\q\a\v\g\6\h\d\j\8\2\u\e\d\7\b\0\g\u\k\k\s\v\j\f\w\k\g\v\p\x\5\l\h\n\5\2\n\w\d\4\2\7\3\3\f\i\o\2\j\v\v\9\0\n\s\r\v\8\p\9 ]] 00:07:57.291 ************************************ 00:07:57.291 END TEST dd_rw_offset 00:07:57.291 ************************************ 00:07:57.291 00:07:57.291 real 0m1.010s 00:07:57.291 user 0m0.677s 00:07:57.291 sys 0m0.213s 00:07:57.291 07:54:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.291 07:54:02 -- common/autotest_common.sh@10 -- # set +x 00:07:57.291 07:54:02 -- dd/basic_rw.sh@1 -- # cleanup 00:07:57.291 07:54:02 -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:07:57.291 07:54:02 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:57.291 07:54:02 -- dd/common.sh@11 -- # local nvme_ref= 00:07:57.291 07:54:02 -- dd/common.sh@12 -- # local size=0xffff 00:07:57.291 07:54:02 -- dd/common.sh@14 -- # local bs=1048576 
00:07:57.291 07:54:02 -- dd/common.sh@15 -- # local count=1 00:07:57.291 07:54:02 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:57.291 07:54:02 -- dd/common.sh@18 -- # gen_conf 00:07:57.291 07:54:02 -- dd/common.sh@31 -- # xtrace_disable 00:07:57.291 07:54:02 -- common/autotest_common.sh@10 -- # set +x 00:07:57.291 [2024-07-13 07:54:02.932448] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:57.291 [2024-07-13 07:54:02.932532] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69051 ] 00:07:57.291 { 00:07:57.291 "subsystems": [ 00:07:57.291 { 00:07:57.291 "subsystem": "bdev", 00:07:57.291 "config": [ 00:07:57.291 { 00:07:57.291 "params": { 00:07:57.291 "trtype": "pcie", 00:07:57.291 "traddr": "0000:00:06.0", 00:07:57.291 "name": "Nvme0" 00:07:57.291 }, 00:07:57.291 "method": "bdev_nvme_attach_controller" 00:07:57.291 }, 00:07:57.291 { 00:07:57.291 "method": "bdev_wait_for_examine" 00:07:57.291 } 00:07:57.291 ] 00:07:57.291 } 00:07:57.291 ] 00:07:57.291 } 00:07:57.291 [2024-07-13 07:54:03.070498] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.549 [2024-07-13 07:54:03.109968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.808  Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:57.808 00:07:57.808 07:54:03 -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.808 00:07:57.808 real 0m13.985s 00:07:57.808 user 0m9.932s 00:07:57.808 sys 0m2.634s 00:07:57.808 07:54:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:57.808 07:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:57.808 ************************************ 00:07:57.808 END TEST spdk_dd_basic_rw 00:07:57.808 ************************************ 00:07:57.808 07:54:03 -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:57.808 07:54:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.808 07:54:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.808 07:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:57.808 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:57.808 ************************************ 00:07:57.808 START TEST spdk_dd_posix 00:07:57.808 ************************************ 00:07:57.808 07:54:03 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:07:57.808 * Looking for test storage... 
00:07:57.808 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:57.808 07:54:03 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:57.808 07:54:03 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.808 07:54:03 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.808 07:54:03 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.808 07:54:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.808 07:54:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.808 07:54:03 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.808 07:54:03 -- paths/export.sh@5 -- # export PATH 00:07:57.809 07:54:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.809 07:54:03 -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:07:57.809 07:54:03 -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:07:57.809 07:54:03 -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:07:57.809 07:54:03 -- dd/posix.sh@125 -- # trap cleanup EXIT 00:07:57.809 07:54:03 -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:57.809 07:54:03 -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:57.809 07:54:03 -- dd/posix.sh@130 -- # tests 00:07:57.809 07:54:03 -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:07:57.809 * First test run, liburing in use 00:07:57.809 07:54:03 -- dd/posix.sh@102 -- # run_test 
dd_flag_append append 00:07:57.809 07:54:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:57.809 07:54:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:57.809 07:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:57.809 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:57.809 ************************************ 00:07:57.809 START TEST dd_flag_append 00:07:57.809 ************************************ 00:07:57.809 07:54:03 -- common/autotest_common.sh@1104 -- # append 00:07:57.809 07:54:03 -- dd/posix.sh@16 -- # local dump0 00:07:57.809 07:54:03 -- dd/posix.sh@17 -- # local dump1 00:07:57.809 07:54:03 -- dd/posix.sh@19 -- # gen_bytes 32 00:07:57.809 07:54:03 -- dd/common.sh@98 -- # xtrace_disable 00:07:57.809 07:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:57.809 07:54:03 -- dd/posix.sh@19 -- # dump0=8hfb7iex9dg85i5eeg39u1ttgbe9gr7l 00:07:57.809 07:54:03 -- dd/posix.sh@20 -- # gen_bytes 32 00:07:57.809 07:54:03 -- dd/common.sh@98 -- # xtrace_disable 00:07:57.809 07:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:57.809 07:54:03 -- dd/posix.sh@20 -- # dump1=yrkqd63lek4n4j4d7bbvzg2x2ucw54s3 00:07:57.809 07:54:03 -- dd/posix.sh@22 -- # printf %s 8hfb7iex9dg85i5eeg39u1ttgbe9gr7l 00:07:57.809 07:54:03 -- dd/posix.sh@23 -- # printf %s yrkqd63lek4n4j4d7bbvzg2x2ucw54s3 00:07:57.809 07:54:03 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:07:57.809 [2024-07-13 07:54:03.564065] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:57.809 [2024-07-13 07:54:03.564162] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69102 ] 00:07:58.067 [2024-07-13 07:54:03.703044] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.067 [2024-07-13 07:54:03.734261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.326  Copying: 32/32 [B] (average 31 kBps) 00:07:58.326 00:07:58.326 07:54:03 -- dd/posix.sh@27 -- # [[ yrkqd63lek4n4j4d7bbvzg2x2ucw54s38hfb7iex9dg85i5eeg39u1ttgbe9gr7l == \y\r\k\q\d\6\3\l\e\k\4\n\4\j\4\d\7\b\b\v\z\g\2\x\2\u\c\w\5\4\s\3\8\h\f\b\7\i\e\x\9\d\g\8\5\i\5\e\e\g\3\9\u\1\t\t\g\b\e\9\g\r\7\l ]] 00:07:58.326 00:07:58.326 real 0m0.411s 00:07:58.326 user 0m0.208s 00:07:58.326 sys 0m0.081s 00:07:58.326 07:54:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.326 07:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:58.326 ************************************ 00:07:58.326 END TEST dd_flag_append 00:07:58.326 ************************************ 00:07:58.326 07:54:03 -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:07:58.326 07:54:03 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:58.326 07:54:03 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:58.326 07:54:03 -- common/autotest_common.sh@10 -- # set +x 00:07:58.326 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:58.326 ************************************ 00:07:58.326 START TEST dd_flag_directory 00:07:58.326 ************************************ 00:07:58.326 07:54:03 -- common/autotest_common.sh@1104 -- # directory 00:07:58.326 
07:54:03 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.326 07:54:03 -- common/autotest_common.sh@640 -- # local es=0 00:07:58.326 07:54:03 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.326 07:54:03 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.326 07:54:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.326 07:54:03 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.326 07:54:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.326 07:54:03 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.326 07:54:03 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.326 07:54:03 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.326 07:54:03 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.326 07:54:03 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:58.326 [2024-07-13 07:54:04.011794] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:58.326 [2024-07-13 07:54:04.011900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69128 ] 00:07:58.326 [2024-07-13 07:54:04.134302] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.584 [2024-07-13 07:54:04.167034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.584 [2024-07-13 07:54:04.207005] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.584 [2024-07-13 07:54:04.207075] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.584 [2024-07-13 07:54:04.207103] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.584 [2024-07-13 07:54:04.269365] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:58.584 07:54:04 -- common/autotest_common.sh@643 -- # es=236 00:07:58.584 07:54:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:58.584 07:54:04 -- common/autotest_common.sh@652 -- # es=108 00:07:58.584 07:54:04 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:58.584 07:54:04 -- common/autotest_common.sh@660 -- # es=1 00:07:58.584 07:54:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:58.584 07:54:04 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.584 07:54:04 -- common/autotest_common.sh@640 -- # local es=0 00:07:58.584 07:54:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.584 07:54:04 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.584 07:54:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.584 07:54:04 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.584 07:54:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.584 07:54:04 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.584 07:54:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:58.584 07:54:04 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:58.584 07:54:04 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:58.584 07:54:04 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:07:58.842 [2024-07-13 07:54:04.404817] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:07:58.842 [2024-07-13 07:54:04.404938] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69132 ] 00:07:58.842 [2024-07-13 07:54:04.540793] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.842 [2024-07-13 07:54:04.572875] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.842 [2024-07-13 07:54:04.612156] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.842 [2024-07-13 07:54:04.612224] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:07:58.842 [2024-07-13 07:54:04.612252] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.113 [2024-07-13 07:54:04.666543] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:59.113 07:54:04 -- common/autotest_common.sh@643 -- # es=236 00:07:59.113 07:54:04 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:59.113 07:54:04 -- common/autotest_common.sh@652 -- # es=108 00:07:59.113 07:54:04 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:59.113 07:54:04 -- common/autotest_common.sh@660 -- # es=1 00:07:59.113 07:54:04 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:59.113 00:07:59.113 real 0m0.751s 00:07:59.113 user 0m0.374s 00:07:59.113 sys 0m0.169s 00:07:59.113 07:54:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.113 07:54:04 -- common/autotest_common.sh@10 -- # set +x 00:07:59.113 ************************************ 00:07:59.113 END TEST dd_flag_directory 00:07:59.113 ************************************ 00:07:59.113 07:54:04 -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:07:59.113 07:54:04 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:07:59.113 07:54:04 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:07:59.113 07:54:04 -- common/autotest_common.sh@10 -- # set +x 00:07:59.113 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:07:59.113 
************************************ 00:07:59.113 START TEST dd_flag_nofollow 00:07:59.113 ************************************ 00:07:59.113 07:54:04 -- common/autotest_common.sh@1104 -- # nofollow 00:07:59.113 07:54:04 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:59.113 07:54:04 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:59.113 07:54:04 -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:07:59.113 07:54:04 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:07:59.113 07:54:04 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.113 07:54:04 -- common/autotest_common.sh@640 -- # local es=0 00:07:59.113 07:54:04 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.113 07:54:04 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.113 07:54:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.113 07:54:04 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.113 07:54:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.113 07:54:04 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.113 07:54:04 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.113 07:54:04 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.113 07:54:04 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.113 07:54:04 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.113 [2024-07-13 07:54:04.826060] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:59.113 [2024-07-13 07:54:04.826196] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69155 ] 00:07:59.380 [2024-07-13 07:54:04.958032] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.380 [2024-07-13 07:54:04.996403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.380 [2024-07-13 07:54:05.044690] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:59.380 [2024-07-13 07:54:05.044754] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:07:59.380 [2024-07-13 07:54:05.044810] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.380 [2024-07-13 07:54:05.106891] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:59.380 07:54:05 -- common/autotest_common.sh@643 -- # es=216 00:07:59.380 07:54:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:59.380 07:54:05 -- common/autotest_common.sh@652 -- # es=88 00:07:59.380 07:54:05 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:59.380 07:54:05 -- common/autotest_common.sh@660 -- # es=1 00:07:59.380 07:54:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:59.380 07:54:05 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.380 07:54:05 -- common/autotest_common.sh@640 -- # local es=0 00:07:59.380 07:54:05 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.380 07:54:05 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.380 07:54:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.380 07:54:05 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.380 07:54:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.380 07:54:05 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.381 07:54:05 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:07:59.381 07:54:05 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:59.381 07:54:05 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:59.381 07:54:05 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:07:59.639 [2024-07-13 07:54:05.222002] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:07:59.639 [2024-07-13 07:54:05.222092] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69164 ] 00:07:59.639 [2024-07-13 07:54:05.359485] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.639 [2024-07-13 07:54:05.388994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.639 [2024-07-13 07:54:05.429703] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:59.639 [2024-07-13 07:54:05.429804] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:07:59.639 [2024-07-13 07:54:05.429836] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:59.897 [2024-07-13 07:54:05.486536] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:07:59.897 07:54:05 -- common/autotest_common.sh@643 -- # es=216 00:07:59.897 07:54:05 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:07:59.897 07:54:05 -- common/autotest_common.sh@652 -- # es=88 00:07:59.897 07:54:05 -- common/autotest_common.sh@653 -- # case "$es" in 00:07:59.897 07:54:05 -- common/autotest_common.sh@660 -- # es=1 00:07:59.897 07:54:05 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:07:59.897 07:54:05 -- dd/posix.sh@46 -- # gen_bytes 512 00:07:59.897 07:54:05 -- dd/common.sh@98 -- # xtrace_disable 00:07:59.897 07:54:05 -- common/autotest_common.sh@10 -- # set +x 00:07:59.897 07:54:05 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:59.897 [2024-07-13 07:54:05.613590] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
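The nofollow case points spdk_dd at symlinks and expects it to refuse them: with --iflag=nofollow or --oflag=nofollow the open fails with "Too many levels of symbolic links" (the *ERROR* lines above), and only the final copy through the link without the flag succeeds. A rough equivalent of those three runs, with the same placeholder caveats (these posix tests copy file to file, so no bdev config is needed):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
ln -fs dd.dump0 dd.dump0.link
ln -fs dd.dump1 dd.dump1.link
"$DD" --if=dd.dump0.link --iflag=nofollow --of=dd.dump1 && echo "unexpected success"  # input is a symlink: must fail
"$DD" --if=dd.dump0 --of=dd.dump1.link --oflag=nofollow && echo "unexpected success"  # output is a symlink: must fail
"$DD" --if=dd.dump0.link --of=dd.dump1                                                # plain copy through the link works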
00:07:59.897 [2024-07-13 07:54:05.613721] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69166 ] 00:08:00.156 [2024-07-13 07:54:05.751017] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.156 [2024-07-13 07:54:05.781571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.156  Copying: 512/512 [B] (average 500 kBps) 00:08:00.156 00:08:00.157 07:54:05 -- dd/posix.sh@49 -- # [[ kbciil45wfb0vr63ayll624fbulqjwne9eokljjm1fuwg1zlmvekff4hvcvrenwftw68bgf7pe5oeknsehgsbm4f5weykb5ba5wxdxv2o3tfuj600zabv636qx46z78w0y7235on7gdu98xfdh6nkmn2u21tauqaa33dlh08c0563kbsdrwvwzwpxu271qtmr7zwq51y29hf7xe1f554a2o06k83nw7e3gw60yn3n95e0k7300mjwcsttjh3nzay5e5aft4yhd3kygio6jn17zkdmnpia5922yqzysrpd6ka16yz32x2efpvdd73c4buewvtnfpmyj277ugw1t0j4q886mfya2mvzwxep2rxvbkkcamtfld9qy09ynyrti6f32iuiddl768r11o9u21s5ap20ekmozc82jwulpzqz9u61dws3gy137tnj1aztkb1u3vsxzmbto5s5p5dtraxfg7xqs4lfpd0o4a20kvn4lg9brt86qmf3bnwzxrnxggu == \k\b\c\i\i\l\4\5\w\f\b\0\v\r\6\3\a\y\l\l\6\2\4\f\b\u\l\q\j\w\n\e\9\e\o\k\l\j\j\m\1\f\u\w\g\1\z\l\m\v\e\k\f\f\4\h\v\c\v\r\e\n\w\f\t\w\6\8\b\g\f\7\p\e\5\o\e\k\n\s\e\h\g\s\b\m\4\f\5\w\e\y\k\b\5\b\a\5\w\x\d\x\v\2\o\3\t\f\u\j\6\0\0\z\a\b\v\6\3\6\q\x\4\6\z\7\8\w\0\y\7\2\3\5\o\n\7\g\d\u\9\8\x\f\d\h\6\n\k\m\n\2\u\2\1\t\a\u\q\a\a\3\3\d\l\h\0\8\c\0\5\6\3\k\b\s\d\r\w\v\w\z\w\p\x\u\2\7\1\q\t\m\r\7\z\w\q\5\1\y\2\9\h\f\7\x\e\1\f\5\5\4\a\2\o\0\6\k\8\3\n\w\7\e\3\g\w\6\0\y\n\3\n\9\5\e\0\k\7\3\0\0\m\j\w\c\s\t\t\j\h\3\n\z\a\y\5\e\5\a\f\t\4\y\h\d\3\k\y\g\i\o\6\j\n\1\7\z\k\d\m\n\p\i\a\5\9\2\2\y\q\z\y\s\r\p\d\6\k\a\1\6\y\z\3\2\x\2\e\f\p\v\d\d\7\3\c\4\b\u\e\w\v\t\n\f\p\m\y\j\2\7\7\u\g\w\1\t\0\j\4\q\8\8\6\m\f\y\a\2\m\v\z\w\x\e\p\2\r\x\v\b\k\k\c\a\m\t\f\l\d\9\q\y\0\9\y\n\y\r\t\i\6\f\3\2\i\u\i\d\d\l\7\6\8\r\1\1\o\9\u\2\1\s\5\a\p\2\0\e\k\m\o\z\c\8\2\j\w\u\l\p\z\q\z\9\u\6\1\d\w\s\3\g\y\1\3\7\t\n\j\1\a\z\t\k\b\1\u\3\v\s\x\z\m\b\t\o\5\s\5\p\5\d\t\r\a\x\f\g\7\x\q\s\4\l\f\p\d\0\o\4\a\2\0\k\v\n\4\l\g\9\b\r\t\8\6\q\m\f\3\b\n\w\z\x\r\n\x\g\g\u ]] 00:08:00.157 00:08:00.157 real 0m1.185s 00:08:00.157 user 0m0.565s 00:08:00.157 sys 0m0.285s 00:08:00.157 07:54:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:00.157 ************************************ 00:08:00.157 END TEST dd_flag_nofollow 00:08:00.157 07:54:05 -- common/autotest_common.sh@10 -- # set +x 00:08:00.157 ************************************ 00:08:00.417 07:54:06 -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:00.417 07:54:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:00.417 07:54:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:00.417 07:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:00.417 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:00.417 ************************************ 00:08:00.417 START TEST dd_flag_noatime 00:08:00.417 ************************************ 00:08:00.417 07:54:06 -- common/autotest_common.sh@1104 -- # noatime 00:08:00.417 07:54:06 -- dd/posix.sh@53 -- # local atime_if 00:08:00.417 07:54:06 -- dd/posix.sh@54 -- # local atime_of 00:08:00.417 07:54:06 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:00.417 07:54:06 -- dd/common.sh@98 -- # xtrace_disable 00:08:00.417 07:54:06 -- common/autotest_common.sh@10 -- # set +x 00:08:00.417 07:54:06 -- dd/posix.sh@60 -- # stat --printf=%X 
/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:00.417 07:54:06 -- dd/posix.sh@60 -- # atime_if=1720857245 00:08:00.417 07:54:06 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:00.417 07:54:06 -- dd/posix.sh@61 -- # atime_of=1720857245 00:08:00.417 07:54:06 -- dd/posix.sh@66 -- # sleep 1 00:08:01.354 07:54:07 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.354 [2024-07-13 07:54:07.086915] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:01.354 [2024-07-13 07:54:07.087011] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69199 ] 00:08:01.613 [2024-07-13 07:54:07.225622] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.613 [2024-07-13 07:54:07.266552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.871  Copying: 512/512 [B] (average 500 kBps) 00:08:01.871 00:08:01.871 07:54:07 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:01.871 07:54:07 -- dd/posix.sh@69 -- # (( atime_if == 1720857245 )) 00:08:01.871 07:54:07 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.871 07:54:07 -- dd/posix.sh@70 -- # (( atime_of == 1720857245 )) 00:08:01.871 07:54:07 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:01.871 [2024-07-13 07:54:07.544792] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
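The noatime check is a before/after comparison of the input file's access time: a read with --iflag=noatime must leave the atime reported by stat --printf=%X unchanged, and a later read without the flag must advance it. A hand-rolled version of that check (file names are placeholders; it assumes the filesystem actually updates atime on read, i.e. it is not mounted noatime):

DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
atime_before=$(stat --printf=%X dd.dump0)
"$DD" --if=dd.dump0 --iflag=noatime --of=dd.dump1
(( $(stat --printf=%X dd.dump0) == atime_before )) || echo "atime moved despite noatime"
sleep 1                                              # make a later access distinguishable
"$DD" --if=dd.dump0 --of=dd.dump1
(( atime_before < $(stat --printf=%X dd.dump0) ))   || echo "atime did not advance"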
00:08:01.871 [2024-07-13 07:54:07.544900] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69206 ] 00:08:01.871 [2024-07-13 07:54:07.680349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.130 [2024-07-13 07:54:07.712575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.130  Copying: 512/512 [B] (average 500 kBps) 00:08:02.130 00:08:02.130 07:54:07 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.130 07:54:07 -- dd/posix.sh@73 -- # (( atime_if < 1720857247 )) 00:08:02.130 00:08:02.130 real 0m1.890s 00:08:02.130 user 0m0.438s 00:08:02.130 sys 0m0.211s 00:08:02.130 ************************************ 00:08:02.130 END TEST dd_flag_noatime 00:08:02.130 07:54:07 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.130 07:54:07 -- common/autotest_common.sh@10 -- # set +x 00:08:02.130 ************************************ 00:08:02.389 07:54:07 -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:02.389 07:54:07 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:02.389 07:54:07 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:02.389 07:54:07 -- common/autotest_common.sh@10 -- # set +x 00:08:02.389 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:02.389 ************************************ 00:08:02.389 START TEST dd_flags_misc 00:08:02.389 ************************************ 00:08:02.389 07:54:07 -- common/autotest_common.sh@1104 -- # io 00:08:02.389 07:54:07 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:02.389 07:54:07 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:02.389 07:54:07 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:02.389 07:54:07 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:02.389 07:54:07 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:02.389 07:54:07 -- dd/common.sh@98 -- # xtrace_disable 00:08:02.389 07:54:07 -- common/autotest_common.sh@10 -- # set +x 00:08:02.389 07:54:07 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.389 07:54:07 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:02.389 [2024-07-13 07:54:08.011365] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:02.389 [2024-07-13 07:54:08.011460] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69227 ] 00:08:02.389 [2024-07-13 07:54:08.147037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.389 [2024-07-13 07:54:08.179024] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.648  Copying: 512/512 [B] (average 500 kBps) 00:08:02.648 00:08:02.649 07:54:08 -- dd/posix.sh@93 -- # [[ uxkwkqy3p4yaauxjwtvs5w2xaxm2okuj7p4oubv7jh5cazn26qstoxtwm867lgp05829k0omsjsioi0o2ykz7yrvapmnv4epf43noke7fn9jfewhvuxsv0bt4p4aiu6zrz590u6zdf5mpf8us7i68ebgiqz3j3alrdqxpksmvq1q4vdj8mw21dyz0o8qvel3dwvlpg0l23mtfrkat5hxpt56zkrh53fsse1b8on7sydruit8p2rb5r20as57k5ypsmy64d8xr6e4gwmchbghtohljigzz1wm9trly66pesdic97imb3uvjv7x0ep4etfpqu4281k2eakvndzho8wksqgnufvp46xh4rbw6g68xaanj7abn0sfgfbgdheu55jynt0mf8eki7qelqx0ivi07lbldmbfukxng01dqgvyzvfxah1j8oin98sts3nfx55qgpql745mm4hnwnxsjzpgk63lo55jq79huxstuvi8n46disvdo5jmfxw8is9uvje == \u\x\k\w\k\q\y\3\p\4\y\a\a\u\x\j\w\t\v\s\5\w\2\x\a\x\m\2\o\k\u\j\7\p\4\o\u\b\v\7\j\h\5\c\a\z\n\2\6\q\s\t\o\x\t\w\m\8\6\7\l\g\p\0\5\8\2\9\k\0\o\m\s\j\s\i\o\i\0\o\2\y\k\z\7\y\r\v\a\p\m\n\v\4\e\p\f\4\3\n\o\k\e\7\f\n\9\j\f\e\w\h\v\u\x\s\v\0\b\t\4\p\4\a\i\u\6\z\r\z\5\9\0\u\6\z\d\f\5\m\p\f\8\u\s\7\i\6\8\e\b\g\i\q\z\3\j\3\a\l\r\d\q\x\p\k\s\m\v\q\1\q\4\v\d\j\8\m\w\2\1\d\y\z\0\o\8\q\v\e\l\3\d\w\v\l\p\g\0\l\2\3\m\t\f\r\k\a\t\5\h\x\p\t\5\6\z\k\r\h\5\3\f\s\s\e\1\b\8\o\n\7\s\y\d\r\u\i\t\8\p\2\r\b\5\r\2\0\a\s\5\7\k\5\y\p\s\m\y\6\4\d\8\x\r\6\e\4\g\w\m\c\h\b\g\h\t\o\h\l\j\i\g\z\z\1\w\m\9\t\r\l\y\6\6\p\e\s\d\i\c\9\7\i\m\b\3\u\v\j\v\7\x\0\e\p\4\e\t\f\p\q\u\4\2\8\1\k\2\e\a\k\v\n\d\z\h\o\8\w\k\s\q\g\n\u\f\v\p\4\6\x\h\4\r\b\w\6\g\6\8\x\a\a\n\j\7\a\b\n\0\s\f\g\f\b\g\d\h\e\u\5\5\j\y\n\t\0\m\f\8\e\k\i\7\q\e\l\q\x\0\i\v\i\0\7\l\b\l\d\m\b\f\u\k\x\n\g\0\1\d\q\g\v\y\z\v\f\x\a\h\1\j\8\o\i\n\9\8\s\t\s\3\n\f\x\5\5\q\g\p\q\l\7\4\5\m\m\4\h\n\w\n\x\s\j\z\p\g\k\6\3\l\o\5\5\j\q\7\9\h\u\x\s\t\u\v\i\8\n\4\6\d\i\s\v\d\o\5\j\m\f\x\w\8\i\s\9\u\v\j\e ]] 00:08:02.649 07:54:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:02.649 07:54:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:02.649 [2024-07-13 07:54:08.427372] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:02.649 [2024-07-13 07:54:08.427487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69234 ] 00:08:02.909 [2024-07-13 07:54:08.565616] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.909 [2024-07-13 07:54:08.595579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.168  Copying: 512/512 [B] (average 500 kBps) 00:08:03.168 00:08:03.168 07:54:08 -- dd/posix.sh@93 -- # [[ uxkwkqy3p4yaauxjwtvs5w2xaxm2okuj7p4oubv7jh5cazn26qstoxtwm867lgp05829k0omsjsioi0o2ykz7yrvapmnv4epf43noke7fn9jfewhvuxsv0bt4p4aiu6zrz590u6zdf5mpf8us7i68ebgiqz3j3alrdqxpksmvq1q4vdj8mw21dyz0o8qvel3dwvlpg0l23mtfrkat5hxpt56zkrh53fsse1b8on7sydruit8p2rb5r20as57k5ypsmy64d8xr6e4gwmchbghtohljigzz1wm9trly66pesdic97imb3uvjv7x0ep4etfpqu4281k2eakvndzho8wksqgnufvp46xh4rbw6g68xaanj7abn0sfgfbgdheu55jynt0mf8eki7qelqx0ivi07lbldmbfukxng01dqgvyzvfxah1j8oin98sts3nfx55qgpql745mm4hnwnxsjzpgk63lo55jq79huxstuvi8n46disvdo5jmfxw8is9uvje == \u\x\k\w\k\q\y\3\p\4\y\a\a\u\x\j\w\t\v\s\5\w\2\x\a\x\m\2\o\k\u\j\7\p\4\o\u\b\v\7\j\h\5\c\a\z\n\2\6\q\s\t\o\x\t\w\m\8\6\7\l\g\p\0\5\8\2\9\k\0\o\m\s\j\s\i\o\i\0\o\2\y\k\z\7\y\r\v\a\p\m\n\v\4\e\p\f\4\3\n\o\k\e\7\f\n\9\j\f\e\w\h\v\u\x\s\v\0\b\t\4\p\4\a\i\u\6\z\r\z\5\9\0\u\6\z\d\f\5\m\p\f\8\u\s\7\i\6\8\e\b\g\i\q\z\3\j\3\a\l\r\d\q\x\p\k\s\m\v\q\1\q\4\v\d\j\8\m\w\2\1\d\y\z\0\o\8\q\v\e\l\3\d\w\v\l\p\g\0\l\2\3\m\t\f\r\k\a\t\5\h\x\p\t\5\6\z\k\r\h\5\3\f\s\s\e\1\b\8\o\n\7\s\y\d\r\u\i\t\8\p\2\r\b\5\r\2\0\a\s\5\7\k\5\y\p\s\m\y\6\4\d\8\x\r\6\e\4\g\w\m\c\h\b\g\h\t\o\h\l\j\i\g\z\z\1\w\m\9\t\r\l\y\6\6\p\e\s\d\i\c\9\7\i\m\b\3\u\v\j\v\7\x\0\e\p\4\e\t\f\p\q\u\4\2\8\1\k\2\e\a\k\v\n\d\z\h\o\8\w\k\s\q\g\n\u\f\v\p\4\6\x\h\4\r\b\w\6\g\6\8\x\a\a\n\j\7\a\b\n\0\s\f\g\f\b\g\d\h\e\u\5\5\j\y\n\t\0\m\f\8\e\k\i\7\q\e\l\q\x\0\i\v\i\0\7\l\b\l\d\m\b\f\u\k\x\n\g\0\1\d\q\g\v\y\z\v\f\x\a\h\1\j\8\o\i\n\9\8\s\t\s\3\n\f\x\5\5\q\g\p\q\l\7\4\5\m\m\4\h\n\w\n\x\s\j\z\p\g\k\6\3\l\o\5\5\j\q\7\9\h\u\x\s\t\u\v\i\8\n\4\6\d\i\s\v\d\o\5\j\m\f\x\w\8\i\s\9\u\v\j\e ]] 00:08:03.168 07:54:08 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.168 07:54:08 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:03.168 [2024-07-13 07:54:08.833062] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:03.168 [2024-07-13 07:54:08.833163] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69236 ] 00:08:03.168 [2024-07-13 07:54:08.970121] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.428 [2024-07-13 07:54:09.002815] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.428  Copying: 512/512 [B] (average 250 kBps) 00:08:03.428 00:08:03.428 07:54:09 -- dd/posix.sh@93 -- # [[ uxkwkqy3p4yaauxjwtvs5w2xaxm2okuj7p4oubv7jh5cazn26qstoxtwm867lgp05829k0omsjsioi0o2ykz7yrvapmnv4epf43noke7fn9jfewhvuxsv0bt4p4aiu6zrz590u6zdf5mpf8us7i68ebgiqz3j3alrdqxpksmvq1q4vdj8mw21dyz0o8qvel3dwvlpg0l23mtfrkat5hxpt56zkrh53fsse1b8on7sydruit8p2rb5r20as57k5ypsmy64d8xr6e4gwmchbghtohljigzz1wm9trly66pesdic97imb3uvjv7x0ep4etfpqu4281k2eakvndzho8wksqgnufvp46xh4rbw6g68xaanj7abn0sfgfbgdheu55jynt0mf8eki7qelqx0ivi07lbldmbfukxng01dqgvyzvfxah1j8oin98sts3nfx55qgpql745mm4hnwnxsjzpgk63lo55jq79huxstuvi8n46disvdo5jmfxw8is9uvje == \u\x\k\w\k\q\y\3\p\4\y\a\a\u\x\j\w\t\v\s\5\w\2\x\a\x\m\2\o\k\u\j\7\p\4\o\u\b\v\7\j\h\5\c\a\z\n\2\6\q\s\t\o\x\t\w\m\8\6\7\l\g\p\0\5\8\2\9\k\0\o\m\s\j\s\i\o\i\0\o\2\y\k\z\7\y\r\v\a\p\m\n\v\4\e\p\f\4\3\n\o\k\e\7\f\n\9\j\f\e\w\h\v\u\x\s\v\0\b\t\4\p\4\a\i\u\6\z\r\z\5\9\0\u\6\z\d\f\5\m\p\f\8\u\s\7\i\6\8\e\b\g\i\q\z\3\j\3\a\l\r\d\q\x\p\k\s\m\v\q\1\q\4\v\d\j\8\m\w\2\1\d\y\z\0\o\8\q\v\e\l\3\d\w\v\l\p\g\0\l\2\3\m\t\f\r\k\a\t\5\h\x\p\t\5\6\z\k\r\h\5\3\f\s\s\e\1\b\8\o\n\7\s\y\d\r\u\i\t\8\p\2\r\b\5\r\2\0\a\s\5\7\k\5\y\p\s\m\y\6\4\d\8\x\r\6\e\4\g\w\m\c\h\b\g\h\t\o\h\l\j\i\g\z\z\1\w\m\9\t\r\l\y\6\6\p\e\s\d\i\c\9\7\i\m\b\3\u\v\j\v\7\x\0\e\p\4\e\t\f\p\q\u\4\2\8\1\k\2\e\a\k\v\n\d\z\h\o\8\w\k\s\q\g\n\u\f\v\p\4\6\x\h\4\r\b\w\6\g\6\8\x\a\a\n\j\7\a\b\n\0\s\f\g\f\b\g\d\h\e\u\5\5\j\y\n\t\0\m\f\8\e\k\i\7\q\e\l\q\x\0\i\v\i\0\7\l\b\l\d\m\b\f\u\k\x\n\g\0\1\d\q\g\v\y\z\v\f\x\a\h\1\j\8\o\i\n\9\8\s\t\s\3\n\f\x\5\5\q\g\p\q\l\7\4\5\m\m\4\h\n\w\n\x\s\j\z\p\g\k\6\3\l\o\5\5\j\q\7\9\h\u\x\s\t\u\v\i\8\n\4\6\d\i\s\v\d\o\5\j\m\f\x\w\8\i\s\9\u\v\j\e ]] 00:08:03.428 07:54:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.428 07:54:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:03.687 [2024-07-13 07:54:09.250113] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:03.687 [2024-07-13 07:54:09.250214] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69243 ] 00:08:03.687 [2024-07-13 07:54:09.388481] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:03.687 [2024-07-13 07:54:09.419572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.946  Copying: 512/512 [B] (average 250 kBps) 00:08:03.946 00:08:03.946 07:54:09 -- dd/posix.sh@93 -- # [[ uxkwkqy3p4yaauxjwtvs5w2xaxm2okuj7p4oubv7jh5cazn26qstoxtwm867lgp05829k0omsjsioi0o2ykz7yrvapmnv4epf43noke7fn9jfewhvuxsv0bt4p4aiu6zrz590u6zdf5mpf8us7i68ebgiqz3j3alrdqxpksmvq1q4vdj8mw21dyz0o8qvel3dwvlpg0l23mtfrkat5hxpt56zkrh53fsse1b8on7sydruit8p2rb5r20as57k5ypsmy64d8xr6e4gwmchbghtohljigzz1wm9trly66pesdic97imb3uvjv7x0ep4etfpqu4281k2eakvndzho8wksqgnufvp46xh4rbw6g68xaanj7abn0sfgfbgdheu55jynt0mf8eki7qelqx0ivi07lbldmbfukxng01dqgvyzvfxah1j8oin98sts3nfx55qgpql745mm4hnwnxsjzpgk63lo55jq79huxstuvi8n46disvdo5jmfxw8is9uvje == \u\x\k\w\k\q\y\3\p\4\y\a\a\u\x\j\w\t\v\s\5\w\2\x\a\x\m\2\o\k\u\j\7\p\4\o\u\b\v\7\j\h\5\c\a\z\n\2\6\q\s\t\o\x\t\w\m\8\6\7\l\g\p\0\5\8\2\9\k\0\o\m\s\j\s\i\o\i\0\o\2\y\k\z\7\y\r\v\a\p\m\n\v\4\e\p\f\4\3\n\o\k\e\7\f\n\9\j\f\e\w\h\v\u\x\s\v\0\b\t\4\p\4\a\i\u\6\z\r\z\5\9\0\u\6\z\d\f\5\m\p\f\8\u\s\7\i\6\8\e\b\g\i\q\z\3\j\3\a\l\r\d\q\x\p\k\s\m\v\q\1\q\4\v\d\j\8\m\w\2\1\d\y\z\0\o\8\q\v\e\l\3\d\w\v\l\p\g\0\l\2\3\m\t\f\r\k\a\t\5\h\x\p\t\5\6\z\k\r\h\5\3\f\s\s\e\1\b\8\o\n\7\s\y\d\r\u\i\t\8\p\2\r\b\5\r\2\0\a\s\5\7\k\5\y\p\s\m\y\6\4\d\8\x\r\6\e\4\g\w\m\c\h\b\g\h\t\o\h\l\j\i\g\z\z\1\w\m\9\t\r\l\y\6\6\p\e\s\d\i\c\9\7\i\m\b\3\u\v\j\v\7\x\0\e\p\4\e\t\f\p\q\u\4\2\8\1\k\2\e\a\k\v\n\d\z\h\o\8\w\k\s\q\g\n\u\f\v\p\4\6\x\h\4\r\b\w\6\g\6\8\x\a\a\n\j\7\a\b\n\0\s\f\g\f\b\g\d\h\e\u\5\5\j\y\n\t\0\m\f\8\e\k\i\7\q\e\l\q\x\0\i\v\i\0\7\l\b\l\d\m\b\f\u\k\x\n\g\0\1\d\q\g\v\y\z\v\f\x\a\h\1\j\8\o\i\n\9\8\s\t\s\3\n\f\x\5\5\q\g\p\q\l\7\4\5\m\m\4\h\n\w\n\x\s\j\z\p\g\k\6\3\l\o\5\5\j\q\7\9\h\u\x\s\t\u\v\i\8\n\4\6\d\i\s\v\d\o\5\j\m\f\x\w\8\i\s\9\u\v\j\e ]] 00:08:03.946 07:54:09 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:03.946 07:54:09 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:03.946 07:54:09 -- dd/common.sh@98 -- # xtrace_disable 00:08:03.946 07:54:09 -- common/autotest_common.sh@10 -- # set +x 00:08:03.946 07:54:09 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:03.946 07:54:09 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:03.946 [2024-07-13 07:54:09.677083] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:03.946 [2024-07-13 07:54:09.677178] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69245 ] 00:08:04.206 [2024-07-13 07:54:09.815194] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.206 [2024-07-13 07:54:09.847038] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.465  Copying: 512/512 [B] (average 500 kBps) 00:08:04.465 00:08:04.465 07:54:10 -- dd/posix.sh@93 -- # [[ grv56mfloj1edcdgd829ssz55z8qvnaknnuyqukhlu2zzuwi70qrrn91jzzysny9huov4r3m0iqjca598csmxk374yd4uu10iieo1iw56ta37ma2m8496acxomdol5o288j4ptjm3tdfgjbidbmpjkytrlbchka5yjqi9rzq6k7iloqy3aziqp2fc844mir7fjzapn3e7xropg4zfsvzs2y7i3buqtk6bmp27tjv0zlt45dlnu5vmy3a01dlzdf5a9704n718hu8ws7gxm4977jfijndx7wkgbwfttfa2yvxgmllu3kon9eax3herwyvin74av17o3yj1jak20jo4vov7t0ggjyzi2ffx6sq26ip62nvolwxly94wy8fdgbe1meu3ki0vophjqhfpkuycpt7isv9m1ud0130vco4sxxuxow91kfvtm3q3mttei8mcvt6ax99gkxz0a2z643qrt3kbk54eqng1wnsvpmd5j6y4fl0r8c7o4p01wxqkaco == \g\r\v\5\6\m\f\l\o\j\1\e\d\c\d\g\d\8\2\9\s\s\z\5\5\z\8\q\v\n\a\k\n\n\u\y\q\u\k\h\l\u\2\z\z\u\w\i\7\0\q\r\r\n\9\1\j\z\z\y\s\n\y\9\h\u\o\v\4\r\3\m\0\i\q\j\c\a\5\9\8\c\s\m\x\k\3\7\4\y\d\4\u\u\1\0\i\i\e\o\1\i\w\5\6\t\a\3\7\m\a\2\m\8\4\9\6\a\c\x\o\m\d\o\l\5\o\2\8\8\j\4\p\t\j\m\3\t\d\f\g\j\b\i\d\b\m\p\j\k\y\t\r\l\b\c\h\k\a\5\y\j\q\i\9\r\z\q\6\k\7\i\l\o\q\y\3\a\z\i\q\p\2\f\c\8\4\4\m\i\r\7\f\j\z\a\p\n\3\e\7\x\r\o\p\g\4\z\f\s\v\z\s\2\y\7\i\3\b\u\q\t\k\6\b\m\p\2\7\t\j\v\0\z\l\t\4\5\d\l\n\u\5\v\m\y\3\a\0\1\d\l\z\d\f\5\a\9\7\0\4\n\7\1\8\h\u\8\w\s\7\g\x\m\4\9\7\7\j\f\i\j\n\d\x\7\w\k\g\b\w\f\t\t\f\a\2\y\v\x\g\m\l\l\u\3\k\o\n\9\e\a\x\3\h\e\r\w\y\v\i\n\7\4\a\v\1\7\o\3\y\j\1\j\a\k\2\0\j\o\4\v\o\v\7\t\0\g\g\j\y\z\i\2\f\f\x\6\s\q\2\6\i\p\6\2\n\v\o\l\w\x\l\y\9\4\w\y\8\f\d\g\b\e\1\m\e\u\3\k\i\0\v\o\p\h\j\q\h\f\p\k\u\y\c\p\t\7\i\s\v\9\m\1\u\d\0\1\3\0\v\c\o\4\s\x\x\u\x\o\w\9\1\k\f\v\t\m\3\q\3\m\t\t\e\i\8\m\c\v\t\6\a\x\9\9\g\k\x\z\0\a\2\z\6\4\3\q\r\t\3\k\b\k\5\4\e\q\n\g\1\w\n\s\v\p\m\d\5\j\6\y\4\f\l\0\r\8\c\7\o\4\p\0\1\w\x\q\k\a\c\o ]] 00:08:04.465 07:54:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:04.465 07:54:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:04.465 [2024-07-13 07:54:10.087097] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:04.465 [2024-07-13 07:54:10.087191] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69247 ] 00:08:04.465 [2024-07-13 07:54:10.223929] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.465 [2024-07-13 07:54:10.256129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.724  Copying: 512/512 [B] (average 500 kBps) 00:08:04.724 00:08:04.724 07:54:10 -- dd/posix.sh@93 -- # [[ grv56mfloj1edcdgd829ssz55z8qvnaknnuyqukhlu2zzuwi70qrrn91jzzysny9huov4r3m0iqjca598csmxk374yd4uu10iieo1iw56ta37ma2m8496acxomdol5o288j4ptjm3tdfgjbidbmpjkytrlbchka5yjqi9rzq6k7iloqy3aziqp2fc844mir7fjzapn3e7xropg4zfsvzs2y7i3buqtk6bmp27tjv0zlt45dlnu5vmy3a01dlzdf5a9704n718hu8ws7gxm4977jfijndx7wkgbwfttfa2yvxgmllu3kon9eax3herwyvin74av17o3yj1jak20jo4vov7t0ggjyzi2ffx6sq26ip62nvolwxly94wy8fdgbe1meu3ki0vophjqhfpkuycpt7isv9m1ud0130vco4sxxuxow91kfvtm3q3mttei8mcvt6ax99gkxz0a2z643qrt3kbk54eqng1wnsvpmd5j6y4fl0r8c7o4p01wxqkaco == \g\r\v\5\6\m\f\l\o\j\1\e\d\c\d\g\d\8\2\9\s\s\z\5\5\z\8\q\v\n\a\k\n\n\u\y\q\u\k\h\l\u\2\z\z\u\w\i\7\0\q\r\r\n\9\1\j\z\z\y\s\n\y\9\h\u\o\v\4\r\3\m\0\i\q\j\c\a\5\9\8\c\s\m\x\k\3\7\4\y\d\4\u\u\1\0\i\i\e\o\1\i\w\5\6\t\a\3\7\m\a\2\m\8\4\9\6\a\c\x\o\m\d\o\l\5\o\2\8\8\j\4\p\t\j\m\3\t\d\f\g\j\b\i\d\b\m\p\j\k\y\t\r\l\b\c\h\k\a\5\y\j\q\i\9\r\z\q\6\k\7\i\l\o\q\y\3\a\z\i\q\p\2\f\c\8\4\4\m\i\r\7\f\j\z\a\p\n\3\e\7\x\r\o\p\g\4\z\f\s\v\z\s\2\y\7\i\3\b\u\q\t\k\6\b\m\p\2\7\t\j\v\0\z\l\t\4\5\d\l\n\u\5\v\m\y\3\a\0\1\d\l\z\d\f\5\a\9\7\0\4\n\7\1\8\h\u\8\w\s\7\g\x\m\4\9\7\7\j\f\i\j\n\d\x\7\w\k\g\b\w\f\t\t\f\a\2\y\v\x\g\m\l\l\u\3\k\o\n\9\e\a\x\3\h\e\r\w\y\v\i\n\7\4\a\v\1\7\o\3\y\j\1\j\a\k\2\0\j\o\4\v\o\v\7\t\0\g\g\j\y\z\i\2\f\f\x\6\s\q\2\6\i\p\6\2\n\v\o\l\w\x\l\y\9\4\w\y\8\f\d\g\b\e\1\m\e\u\3\k\i\0\v\o\p\h\j\q\h\f\p\k\u\y\c\p\t\7\i\s\v\9\m\1\u\d\0\1\3\0\v\c\o\4\s\x\x\u\x\o\w\9\1\k\f\v\t\m\3\q\3\m\t\t\e\i\8\m\c\v\t\6\a\x\9\9\g\k\x\z\0\a\2\z\6\4\3\q\r\t\3\k\b\k\5\4\e\q\n\g\1\w\n\s\v\p\m\d\5\j\6\y\4\f\l\0\r\8\c\7\o\4\p\0\1\w\x\q\k\a\c\o ]] 00:08:04.724 07:54:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:04.724 07:54:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:04.724 [2024-07-13 07:54:10.493136] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:04.724 [2024-07-13 07:54:10.493234] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69254 ] 00:08:04.983 [2024-07-13 07:54:10.630530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.983 [2024-07-13 07:54:10.661698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.242  Copying: 512/512 [B] (average 500 kBps) 00:08:05.242 00:08:05.242 07:54:10 -- dd/posix.sh@93 -- # [[ grv56mfloj1edcdgd829ssz55z8qvnaknnuyqukhlu2zzuwi70qrrn91jzzysny9huov4r3m0iqjca598csmxk374yd4uu10iieo1iw56ta37ma2m8496acxomdol5o288j4ptjm3tdfgjbidbmpjkytrlbchka5yjqi9rzq6k7iloqy3aziqp2fc844mir7fjzapn3e7xropg4zfsvzs2y7i3buqtk6bmp27tjv0zlt45dlnu5vmy3a01dlzdf5a9704n718hu8ws7gxm4977jfijndx7wkgbwfttfa2yvxgmllu3kon9eax3herwyvin74av17o3yj1jak20jo4vov7t0ggjyzi2ffx6sq26ip62nvolwxly94wy8fdgbe1meu3ki0vophjqhfpkuycpt7isv9m1ud0130vco4sxxuxow91kfvtm3q3mttei8mcvt6ax99gkxz0a2z643qrt3kbk54eqng1wnsvpmd5j6y4fl0r8c7o4p01wxqkaco == \g\r\v\5\6\m\f\l\o\j\1\e\d\c\d\g\d\8\2\9\s\s\z\5\5\z\8\q\v\n\a\k\n\n\u\y\q\u\k\h\l\u\2\z\z\u\w\i\7\0\q\r\r\n\9\1\j\z\z\y\s\n\y\9\h\u\o\v\4\r\3\m\0\i\q\j\c\a\5\9\8\c\s\m\x\k\3\7\4\y\d\4\u\u\1\0\i\i\e\o\1\i\w\5\6\t\a\3\7\m\a\2\m\8\4\9\6\a\c\x\o\m\d\o\l\5\o\2\8\8\j\4\p\t\j\m\3\t\d\f\g\j\b\i\d\b\m\p\j\k\y\t\r\l\b\c\h\k\a\5\y\j\q\i\9\r\z\q\6\k\7\i\l\o\q\y\3\a\z\i\q\p\2\f\c\8\4\4\m\i\r\7\f\j\z\a\p\n\3\e\7\x\r\o\p\g\4\z\f\s\v\z\s\2\y\7\i\3\b\u\q\t\k\6\b\m\p\2\7\t\j\v\0\z\l\t\4\5\d\l\n\u\5\v\m\y\3\a\0\1\d\l\z\d\f\5\a\9\7\0\4\n\7\1\8\h\u\8\w\s\7\g\x\m\4\9\7\7\j\f\i\j\n\d\x\7\w\k\g\b\w\f\t\t\f\a\2\y\v\x\g\m\l\l\u\3\k\o\n\9\e\a\x\3\h\e\r\w\y\v\i\n\7\4\a\v\1\7\o\3\y\j\1\j\a\k\2\0\j\o\4\v\o\v\7\t\0\g\g\j\y\z\i\2\f\f\x\6\s\q\2\6\i\p\6\2\n\v\o\l\w\x\l\y\9\4\w\y\8\f\d\g\b\e\1\m\e\u\3\k\i\0\v\o\p\h\j\q\h\f\p\k\u\y\c\p\t\7\i\s\v\9\m\1\u\d\0\1\3\0\v\c\o\4\s\x\x\u\x\o\w\9\1\k\f\v\t\m\3\q\3\m\t\t\e\i\8\m\c\v\t\6\a\x\9\9\g\k\x\z\0\a\2\z\6\4\3\q\r\t\3\k\b\k\5\4\e\q\n\g\1\w\n\s\v\p\m\d\5\j\6\y\4\f\l\0\r\8\c\7\o\4\p\0\1\w\x\q\k\a\c\o ]] 00:08:05.242 07:54:10 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:05.242 07:54:10 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:05.242 [2024-07-13 07:54:10.900702] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:05.242 [2024-07-13 07:54:10.900812] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69256 ] 00:08:05.242 [2024-07-13 07:54:11.035534] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.501 [2024-07-13 07:54:11.067423] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.501  Copying: 512/512 [B] (average 166 kBps) 00:08:05.501 00:08:05.502 ************************************ 00:08:05.502 END TEST dd_flags_misc 00:08:05.502 ************************************ 00:08:05.502 07:54:11 -- dd/posix.sh@93 -- # [[ grv56mfloj1edcdgd829ssz55z8qvnaknnuyqukhlu2zzuwi70qrrn91jzzysny9huov4r3m0iqjca598csmxk374yd4uu10iieo1iw56ta37ma2m8496acxomdol5o288j4ptjm3tdfgjbidbmpjkytrlbchka5yjqi9rzq6k7iloqy3aziqp2fc844mir7fjzapn3e7xropg4zfsvzs2y7i3buqtk6bmp27tjv0zlt45dlnu5vmy3a01dlzdf5a9704n718hu8ws7gxm4977jfijndx7wkgbwfttfa2yvxgmllu3kon9eax3herwyvin74av17o3yj1jak20jo4vov7t0ggjyzi2ffx6sq26ip62nvolwxly94wy8fdgbe1meu3ki0vophjqhfpkuycpt7isv9m1ud0130vco4sxxuxow91kfvtm3q3mttei8mcvt6ax99gkxz0a2z643qrt3kbk54eqng1wnsvpmd5j6y4fl0r8c7o4p01wxqkaco == \g\r\v\5\6\m\f\l\o\j\1\e\d\c\d\g\d\8\2\9\s\s\z\5\5\z\8\q\v\n\a\k\n\n\u\y\q\u\k\h\l\u\2\z\z\u\w\i\7\0\q\r\r\n\9\1\j\z\z\y\s\n\y\9\h\u\o\v\4\r\3\m\0\i\q\j\c\a\5\9\8\c\s\m\x\k\3\7\4\y\d\4\u\u\1\0\i\i\e\o\1\i\w\5\6\t\a\3\7\m\a\2\m\8\4\9\6\a\c\x\o\m\d\o\l\5\o\2\8\8\j\4\p\t\j\m\3\t\d\f\g\j\b\i\d\b\m\p\j\k\y\t\r\l\b\c\h\k\a\5\y\j\q\i\9\r\z\q\6\k\7\i\l\o\q\y\3\a\z\i\q\p\2\f\c\8\4\4\m\i\r\7\f\j\z\a\p\n\3\e\7\x\r\o\p\g\4\z\f\s\v\z\s\2\y\7\i\3\b\u\q\t\k\6\b\m\p\2\7\t\j\v\0\z\l\t\4\5\d\l\n\u\5\v\m\y\3\a\0\1\d\l\z\d\f\5\a\9\7\0\4\n\7\1\8\h\u\8\w\s\7\g\x\m\4\9\7\7\j\f\i\j\n\d\x\7\w\k\g\b\w\f\t\t\f\a\2\y\v\x\g\m\l\l\u\3\k\o\n\9\e\a\x\3\h\e\r\w\y\v\i\n\7\4\a\v\1\7\o\3\y\j\1\j\a\k\2\0\j\o\4\v\o\v\7\t\0\g\g\j\y\z\i\2\f\f\x\6\s\q\2\6\i\p\6\2\n\v\o\l\w\x\l\y\9\4\w\y\8\f\d\g\b\e\1\m\e\u\3\k\i\0\v\o\p\h\j\q\h\f\p\k\u\y\c\p\t\7\i\s\v\9\m\1\u\d\0\1\3\0\v\c\o\4\s\x\x\u\x\o\w\9\1\k\f\v\t\m\3\q\3\m\t\t\e\i\8\m\c\v\t\6\a\x\9\9\g\k\x\z\0\a\2\z\6\4\3\q\r\t\3\k\b\k\5\4\e\q\n\g\1\w\n\s\v\p\m\d\5\j\6\y\4\f\l\0\r\8\c\7\o\4\p\0\1\w\x\q\k\a\c\o ]] 00:08:05.502 00:08:05.502 real 0m3.306s 00:08:05.502 user 0m1.630s 00:08:05.502 sys 0m0.699s 00:08:05.502 07:54:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.502 07:54:11 -- common/autotest_common.sh@10 -- # set +x 00:08:05.502 07:54:11 -- dd/posix.sh@131 -- # tests_forced_aio 00:08:05.502 07:54:11 -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:05.502 * Second test run, disabling liburing, forcing AIO 00:08:05.502 07:54:11 -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:05.502 07:54:11 -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:05.502 07:54:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:05.502 07:54:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:05.502 07:54:11 -- common/autotest_common.sh@10 -- # set +x 00:08:05.761 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:05.761 ************************************ 00:08:05.761 START TEST dd_flag_append_forced_aio 00:08:05.761 ************************************ 00:08:05.761 07:54:11 -- common/autotest_common.sh@1104 -- # append 00:08:05.761 07:54:11 -- dd/posix.sh@16 -- # local dump0 00:08:05.761 07:54:11 -- 
dd/posix.sh@17 -- # local dump1 00:08:05.761 07:54:11 -- dd/posix.sh@19 -- # gen_bytes 32 00:08:05.761 07:54:11 -- dd/common.sh@98 -- # xtrace_disable 00:08:05.761 07:54:11 -- common/autotest_common.sh@10 -- # set +x 00:08:05.761 07:54:11 -- dd/posix.sh@19 -- # dump0=75jx1eaz9s5g6a3dm5e7pd9gef63xcjq 00:08:05.761 07:54:11 -- dd/posix.sh@20 -- # gen_bytes 32 00:08:05.761 07:54:11 -- dd/common.sh@98 -- # xtrace_disable 00:08:05.761 07:54:11 -- common/autotest_common.sh@10 -- # set +x 00:08:05.761 07:54:11 -- dd/posix.sh@20 -- # dump1=4npeamcw52xc317sqryqa6zrvx3qjm88 00:08:05.761 07:54:11 -- dd/posix.sh@22 -- # printf %s 75jx1eaz9s5g6a3dm5e7pd9gef63xcjq 00:08:05.761 07:54:11 -- dd/posix.sh@23 -- # printf %s 4npeamcw52xc317sqryqa6zrvx3qjm88 00:08:05.761 07:54:11 -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:05.761 [2024-07-13 07:54:11.379261] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:05.761 [2024-07-13 07:54:11.379362] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69282 ] 00:08:05.761 [2024-07-13 07:54:11.515107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.761 [2024-07-13 07:54:11.550496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.021  Copying: 32/32 [B] (average 31 kBps) 00:08:06.021 00:08:06.021 07:54:11 -- dd/posix.sh@27 -- # [[ 4npeamcw52xc317sqryqa6zrvx3qjm8875jx1eaz9s5g6a3dm5e7pd9gef63xcjq == \4\n\p\e\a\m\c\w\5\2\x\c\3\1\7\s\q\r\y\q\a\6\z\r\v\x\3\q\j\m\8\8\7\5\j\x\1\e\a\z\9\s\5\g\6\a\3\d\m\5\e\7\p\d\9\g\e\f\6\3\x\c\j\q ]] 00:08:06.021 00:08:06.021 real 0m0.427s 00:08:06.021 user 0m0.205s 00:08:06.021 sys 0m0.098s 00:08:06.021 07:54:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:06.021 07:54:11 -- common/autotest_common.sh@10 -- # set +x 00:08:06.021 ************************************ 00:08:06.021 END TEST dd_flag_append_forced_aio 00:08:06.021 ************************************ 00:08:06.021 07:54:11 -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:06.021 07:54:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:06.021 07:54:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:06.021 07:54:11 -- common/autotest_common.sh@10 -- # set +x 00:08:06.021 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:06.021 ************************************ 00:08:06.021 START TEST dd_flag_directory_forced_aio 00:08:06.021 ************************************ 00:08:06.021 07:54:11 -- common/autotest_common.sh@1104 -- # directory 00:08:06.021 07:54:11 -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:06.021 07:54:11 -- common/autotest_common.sh@640 -- # local es=0 00:08:06.021 07:54:11 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:06.021 07:54:11 -- common/autotest_common.sh@628 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.021 07:54:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:06.021 07:54:11 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.021 07:54:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:06.021 07:54:11 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.021 07:54:11 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:06.021 07:54:11 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.021 07:54:11 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.021 07:54:11 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:06.280 [2024-07-13 07:54:11.855472] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:06.281 [2024-07-13 07:54:11.855552] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69303 ] 00:08:06.281 [2024-07-13 07:54:11.991674] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.281 [2024-07-13 07:54:12.023058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.281 [2024-07-13 07:54:12.063778] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:06.281 [2024-07-13 07:54:12.063863] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:06.281 [2024-07-13 07:54:12.063894] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.539 [2024-07-13 07:54:12.120654] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:06.540 07:54:12 -- common/autotest_common.sh@643 -- # es=236 00:08:06.540 07:54:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:06.540 07:54:12 -- common/autotest_common.sh@652 -- # es=108 00:08:06.540 07:54:12 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:06.540 07:54:12 -- common/autotest_common.sh@660 -- # es=1 00:08:06.540 07:54:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:06.540 07:54:12 -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:06.540 07:54:12 -- common/autotest_common.sh@640 -- # local es=0 00:08:06.540 07:54:12 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:06.540 07:54:12 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.540 07:54:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:06.540 07:54:12 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.540 07:54:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:06.540 07:54:12 -- 
common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.540 07:54:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:06.540 07:54:12 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.540 07:54:12 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.540 07:54:12 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:06.540 [2024-07-13 07:54:12.254854] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:06.540 [2024-07-13 07:54:12.254973] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69307 ] 00:08:06.798 [2024-07-13 07:54:12.394012] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.798 [2024-07-13 07:54:12.431996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.798 [2024-07-13 07:54:12.479433] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:06.798 [2024-07-13 07:54:12.479498] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:06.798 [2024-07-13 07:54:12.479519] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.798 [2024-07-13 07:54:12.540751] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:06.798 07:54:12 -- common/autotest_common.sh@643 -- # es=236 00:08:06.798 07:54:12 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:06.798 07:54:12 -- common/autotest_common.sh@652 -- # es=108 00:08:06.798 07:54:12 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:06.798 07:54:12 -- common/autotest_common.sh@660 -- # es=1 00:08:06.798 07:54:12 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:06.798 00:08:06.798 real 0m0.809s 00:08:06.798 user 0m0.415s 00:08:06.798 sys 0m0.184s 00:08:06.798 07:54:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.060 ************************************ 00:08:07.060 END TEST dd_flag_directory_forced_aio 00:08:07.060 ************************************ 00:08:07.060 07:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:07.060 07:54:12 -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:07.060 07:54:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:07.060 07:54:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:07.060 07:54:12 -- common/autotest_common.sh@10 -- # set +x 00:08:07.060 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:07.060 ************************************ 00:08:07.060 START TEST dd_flag_nofollow_forced_aio 00:08:07.060 ************************************ 00:08:07.060 07:54:12 -- common/autotest_common.sh@1104 -- # nofollow 00:08:07.060 07:54:12 -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:07.060 07:54:12 -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:07.060 07:54:12 -- dd/posix.sh@39 -- # ln -fs 
/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:07.060 07:54:12 -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:07.060 07:54:12 -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.060 07:54:12 -- common/autotest_common.sh@640 -- # local es=0 00:08:07.060 07:54:12 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.060 07:54:12 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.060 07:54:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:07.060 07:54:12 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.060 07:54:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:07.060 07:54:12 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.060 07:54:12 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:07.060 07:54:12 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.060 07:54:12 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.060 07:54:12 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.060 [2024-07-13 07:54:12.723379] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:07.060 [2024-07-13 07:54:12.723484] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69335 ] 00:08:07.060 [2024-07-13 07:54:12.860900] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.326 [2024-07-13 07:54:12.900465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.326 [2024-07-13 07:54:12.949124] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:07.326 [2024-07-13 07:54:12.949191] spdk_dd.c:1067:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:07.326 [2024-07-13 07:54:12.949210] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.326 [2024-07-13 07:54:13.019325] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:07.326 07:54:13 -- common/autotest_common.sh@643 -- # es=216 00:08:07.326 07:54:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:07.326 07:54:13 -- common/autotest_common.sh@652 -- # es=88 00:08:07.326 07:54:13 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:07.326 07:54:13 -- common/autotest_common.sh@660 -- # es=1 00:08:07.326 07:54:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:07.326 07:54:13 -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:07.326 07:54:13 -- common/autotest_common.sh@640 -- # local es=0 00:08:07.326 07:54:13 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:07.326 07:54:13 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.326 07:54:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:07.326 07:54:13 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.326 07:54:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:07.326 07:54:13 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.326 07:54:13 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:07.326 07:54:13 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:07.326 07:54:13 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:07.326 07:54:13 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:07.584 [2024-07-13 07:54:13.156854] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:07.584 [2024-07-13 07:54:13.156942] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69339 ] 00:08:07.584 [2024-07-13 07:54:13.294021] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.584 [2024-07-13 07:54:13.323290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.584 [2024-07-13 07:54:13.364909] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:07.584 [2024-07-13 07:54:13.364956] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:07.584 [2024-07-13 07:54:13.364987] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:07.841 [2024-07-13 07:54:13.423265] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:07.841 07:54:13 -- common/autotest_common.sh@643 -- # es=216 00:08:07.841 07:54:13 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:07.841 07:54:13 -- common/autotest_common.sh@652 -- # es=88 00:08:07.841 07:54:13 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:07.841 07:54:13 -- common/autotest_common.sh@660 -- # es=1 00:08:07.841 07:54:13 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:07.841 07:54:13 -- dd/posix.sh@46 -- # gen_bytes 512 00:08:07.841 07:54:13 -- dd/common.sh@98 -- # xtrace_disable 00:08:07.841 07:54:13 -- common/autotest_common.sh@10 -- # set +x 00:08:07.842 07:54:13 -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:07.842 [2024-07-13 07:54:13.545045] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:07.842 [2024-07-13 07:54:13.545135] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69346 ] 00:08:08.100 [2024-07-13 07:54:13.675270] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.100 [2024-07-13 07:54:13.704669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.100  Copying: 512/512 [B] (average 500 kBps) 00:08:08.100 00:08:08.100 07:54:13 -- dd/posix.sh@49 -- # [[ ayx3gox5sovn4a5j0rl8r4phji5j9n4xi50cfrrry3anmiigghopjdemgzuu3xwk2s6walqyyb5thb5invpghpya22mndpo4169icqve7zv9dq2meydzj2y8pco5a1yuxnd5flk55cj8stm5zvwe9mcwz6hmp7bzqamda7ubcl0y0wl8vfd1e5b5h0y7a249qbb0uslpldvtrw8njjvnyfrew04t5c1czcn5yxgp7dquwjy3r79dn312cpczs9a8p71m388eloez036c2pj1qf4cjw5qw23dzh5ry3sxayknqzyfhpnbetq6rknp2t6os9bnbiqjfo2bg6zff521tz5p1dk0udqxv9r6r6y5txtpz7pkpipjwvi6qihvys632ql4kf6ld84kuduqu6qi9a9oioi8szxlsqofpae0uli8henif7m5gerjkcb48u34kcr2cbxgrhp08lqob8hs6p9yvao5c81dz8o5clcpnkt324kt2l1jddh5ly3nqpk1 == \a\y\x\3\g\o\x\5\s\o\v\n\4\a\5\j\0\r\l\8\r\4\p\h\j\i\5\j\9\n\4\x\i\5\0\c\f\r\r\r\y\3\a\n\m\i\i\g\g\h\o\p\j\d\e\m\g\z\u\u\3\x\w\k\2\s\6\w\a\l\q\y\y\b\5\t\h\b\5\i\n\v\p\g\h\p\y\a\2\2\m\n\d\p\o\4\1\6\9\i\c\q\v\e\7\z\v\9\d\q\2\m\e\y\d\z\j\2\y\8\p\c\o\5\a\1\y\u\x\n\d\5\f\l\k\5\5\c\j\8\s\t\m\5\z\v\w\e\9\m\c\w\z\6\h\m\p\7\b\z\q\a\m\d\a\7\u\b\c\l\0\y\0\w\l\8\v\f\d\1\e\5\b\5\h\0\y\7\a\2\4\9\q\b\b\0\u\s\l\p\l\d\v\t\r\w\8\n\j\j\v\n\y\f\r\e\w\0\4\t\5\c\1\c\z\c\n\5\y\x\g\p\7\d\q\u\w\j\y\3\r\7\9\d\n\3\1\2\c\p\c\z\s\9\a\8\p\7\1\m\3\8\8\e\l\o\e\z\0\3\6\c\2\p\j\1\q\f\4\c\j\w\5\q\w\2\3\d\z\h\5\r\y\3\s\x\a\y\k\n\q\z\y\f\h\p\n\b\e\t\q\6\r\k\n\p\2\t\6\o\s\9\b\n\b\i\q\j\f\o\2\b\g\6\z\f\f\5\2\1\t\z\5\p\1\d\k\0\u\d\q\x\v\9\r\6\r\6\y\5\t\x\t\p\z\7\p\k\p\i\p\j\w\v\i\6\q\i\h\v\y\s\6\3\2\q\l\4\k\f\6\l\d\8\4\k\u\d\u\q\u\6\q\i\9\a\9\o\i\o\i\8\s\z\x\l\s\q\o\f\p\a\e\0\u\l\i\8\h\e\n\i\f\7\m\5\g\e\r\j\k\c\b\4\8\u\3\4\k\c\r\2\c\b\x\g\r\h\p\0\8\l\q\o\b\8\h\s\6\p\9\y\v\a\o\5\c\8\1\d\z\8\o\5\c\l\c\p\n\k\t\3\2\4\k\t\2\l\1\j\d\d\h\5\l\y\3\n\q\p\k\1 ]] 00:08:08.100 00:08:08.100 real 0m1.231s 00:08:08.100 user 0m0.625s 00:08:08.100 sys 0m0.275s 00:08:08.100 07:54:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.100 ************************************ 00:08:08.100 END TEST dd_flag_nofollow_forced_aio 00:08:08.100 ************************************ 00:08:08.100 07:54:13 -- common/autotest_common.sh@10 -- # set +x 00:08:08.358 07:54:13 -- dd/posix.sh@117 -- # run_test dd_flag_noatime_forced_aio noatime 00:08:08.358 07:54:13 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:08.359 07:54:13 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:08.359 07:54:13 -- common/autotest_common.sh@10 -- # set +x 00:08:08.359 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:08.359 ************************************ 00:08:08.359 START TEST dd_flag_noatime_forced_aio 00:08:08.359 ************************************ 00:08:08.359 07:54:13 -- common/autotest_common.sh@1104 -- # noatime 00:08:08.359 07:54:13 -- dd/posix.sh@53 -- # local atime_if 00:08:08.359 07:54:13 -- dd/posix.sh@54 -- # local atime_of 00:08:08.359 07:54:13 -- dd/posix.sh@58 -- # gen_bytes 512 00:08:08.359 07:54:13 -- dd/common.sh@98 -- # xtrace_disable 00:08:08.359 07:54:13 -- common/autotest_common.sh@10 -- # set +x 00:08:08.359 07:54:13 -- dd/posix.sh@60 -- # stat --printf=%X 
/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:08.359 07:54:13 -- dd/posix.sh@60 -- # atime_if=1720857253 00:08:08.359 07:54:13 -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:08.359 07:54:13 -- dd/posix.sh@61 -- # atime_of=1720857253 00:08:08.359 07:54:13 -- dd/posix.sh@66 -- # sleep 1 00:08:09.295 07:54:14 -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.295 [2024-07-13 07:54:15.022105] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:09.295 [2024-07-13 07:54:15.022228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69375 ] 00:08:09.553 [2024-07-13 07:54:15.159524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.553 [2024-07-13 07:54:15.197809] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.812  Copying: 512/512 [B] (average 500 kBps) 00:08:09.812 00:08:09.812 07:54:15 -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:09.812 07:54:15 -- dd/posix.sh@69 -- # (( atime_if == 1720857253 )) 00:08:09.812 07:54:15 -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.812 07:54:15 -- dd/posix.sh@70 -- # (( atime_of == 1720857253 )) 00:08:09.812 07:54:15 -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.812 [2024-07-13 07:54:15.450321] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:09.812 [2024-07-13 07:54:15.450407] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69381 ] 00:08:09.812 [2024-07-13 07:54:15.589595] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.812 [2024-07-13 07:54:15.625768] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.071  Copying: 512/512 [B] (average 500 kBps) 00:08:10.071 00:08:10.071 07:54:15 -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:10.071 07:54:15 -- dd/posix.sh@73 -- # (( atime_if < 1720857255 )) 00:08:10.071 00:08:10.071 real 0m1.868s 00:08:10.071 user 0m0.436s 00:08:10.071 sys 0m0.195s 00:08:10.071 ************************************ 00:08:10.071 END TEST dd_flag_noatime_forced_aio 00:08:10.071 ************************************ 00:08:10.071 07:54:15 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.071 07:54:15 -- common/autotest_common.sh@10 -- # set +x 00:08:10.071 07:54:15 -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:10.072 07:54:15 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:10.072 07:54:15 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:10.072 07:54:15 -- common/autotest_common.sh@10 -- # set +x 00:08:10.072 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:10.072 ************************************ 00:08:10.072 START TEST dd_flags_misc_forced_aio 00:08:10.072 ************************************ 00:08:10.072 07:54:15 -- common/autotest_common.sh@1104 -- # io 00:08:10.072 07:54:15 -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:10.072 07:54:15 -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:10.072 07:54:15 -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:10.072 07:54:15 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:10.072 07:54:15 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:10.072 07:54:15 -- dd/common.sh@98 -- # xtrace_disable 00:08:10.072 07:54:15 -- common/autotest_common.sh@10 -- # set +x 00:08:10.332 07:54:15 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:10.332 07:54:15 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:10.332 [2024-07-13 07:54:15.929042] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:10.332 [2024-07-13 07:54:15.929146] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69407 ] 00:08:10.332 [2024-07-13 07:54:16.058958] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.332 [2024-07-13 07:54:16.088853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.591  Copying: 512/512 [B] (average 500 kBps) 00:08:10.591 00:08:10.591 07:54:16 -- dd/posix.sh@93 -- # [[ yhklu65bsgj12hg363x7ti5933w34084dg8rdpxw2gxroeoqmqnvf74eztz5ahfyga8ta2ajgczmi2hsfqd1hqnk6qlxoq1azb51msm30l83ipvxcr9q0fa16mb45tguq10vfiwqnvgtkie56pd393csiirucrtidovj9atx1alxk9u5lu9wh5w4896xn9nqk4oh6vkis1af0j0apjvo1xqp891agkkv049deyir0l95slmtjz1x8dl1xash65az29tqx4nkpjdr5uerx7aov4eh659y7b1jq7cnvvf1du7asvgyo8o4tddhe2o64q7f99oyq0k81asey9flza2iaq4eto0p0evbew2e2298kg04jw7x447b9qjkyf0urrwba46jrfnwc4i4v9wif9y0pfju7kvejfef3iic4lfkl8s0qs3irynfvq741dd1jxk1tag8vafxur8hwlmygd8q0zxqwzbh9odd9rb7yv09phxrf1dajnxtmvtho6x1gouu == \y\h\k\l\u\6\5\b\s\g\j\1\2\h\g\3\6\3\x\7\t\i\5\9\3\3\w\3\4\0\8\4\d\g\8\r\d\p\x\w\2\g\x\r\o\e\o\q\m\q\n\v\f\7\4\e\z\t\z\5\a\h\f\y\g\a\8\t\a\2\a\j\g\c\z\m\i\2\h\s\f\q\d\1\h\q\n\k\6\q\l\x\o\q\1\a\z\b\5\1\m\s\m\3\0\l\8\3\i\p\v\x\c\r\9\q\0\f\a\1\6\m\b\4\5\t\g\u\q\1\0\v\f\i\w\q\n\v\g\t\k\i\e\5\6\p\d\3\9\3\c\s\i\i\r\u\c\r\t\i\d\o\v\j\9\a\t\x\1\a\l\x\k\9\u\5\l\u\9\w\h\5\w\4\8\9\6\x\n\9\n\q\k\4\o\h\6\v\k\i\s\1\a\f\0\j\0\a\p\j\v\o\1\x\q\p\8\9\1\a\g\k\k\v\0\4\9\d\e\y\i\r\0\l\9\5\s\l\m\t\j\z\1\x\8\d\l\1\x\a\s\h\6\5\a\z\2\9\t\q\x\4\n\k\p\j\d\r\5\u\e\r\x\7\a\o\v\4\e\h\6\5\9\y\7\b\1\j\q\7\c\n\v\v\f\1\d\u\7\a\s\v\g\y\o\8\o\4\t\d\d\h\e\2\o\6\4\q\7\f\9\9\o\y\q\0\k\8\1\a\s\e\y\9\f\l\z\a\2\i\a\q\4\e\t\o\0\p\0\e\v\b\e\w\2\e\2\2\9\8\k\g\0\4\j\w\7\x\4\4\7\b\9\q\j\k\y\f\0\u\r\r\w\b\a\4\6\j\r\f\n\w\c\4\i\4\v\9\w\i\f\9\y\0\p\f\j\u\7\k\v\e\j\f\e\f\3\i\i\c\4\l\f\k\l\8\s\0\q\s\3\i\r\y\n\f\v\q\7\4\1\d\d\1\j\x\k\1\t\a\g\8\v\a\f\x\u\r\8\h\w\l\m\y\g\d\8\q\0\z\x\q\w\z\b\h\9\o\d\d\9\r\b\7\y\v\0\9\p\h\x\r\f\1\d\a\j\n\x\t\m\v\t\h\o\6\x\1\g\o\u\u ]] 00:08:10.591 07:54:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:10.591 07:54:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:10.591 [2024-07-13 07:54:16.320354] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:10.591 [2024-07-13 07:54:16.320442] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69409 ] 00:08:10.850 [2024-07-13 07:54:16.455602] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.850 [2024-07-13 07:54:16.485646] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.110  Copying: 512/512 [B] (average 500 kBps) 00:08:11.110 00:08:11.110 07:54:16 -- dd/posix.sh@93 -- # [[ yhklu65bsgj12hg363x7ti5933w34084dg8rdpxw2gxroeoqmqnvf74eztz5ahfyga8ta2ajgczmi2hsfqd1hqnk6qlxoq1azb51msm30l83ipvxcr9q0fa16mb45tguq10vfiwqnvgtkie56pd393csiirucrtidovj9atx1alxk9u5lu9wh5w4896xn9nqk4oh6vkis1af0j0apjvo1xqp891agkkv049deyir0l95slmtjz1x8dl1xash65az29tqx4nkpjdr5uerx7aov4eh659y7b1jq7cnvvf1du7asvgyo8o4tddhe2o64q7f99oyq0k81asey9flza2iaq4eto0p0evbew2e2298kg04jw7x447b9qjkyf0urrwba46jrfnwc4i4v9wif9y0pfju7kvejfef3iic4lfkl8s0qs3irynfvq741dd1jxk1tag8vafxur8hwlmygd8q0zxqwzbh9odd9rb7yv09phxrf1dajnxtmvtho6x1gouu == \y\h\k\l\u\6\5\b\s\g\j\1\2\h\g\3\6\3\x\7\t\i\5\9\3\3\w\3\4\0\8\4\d\g\8\r\d\p\x\w\2\g\x\r\o\e\o\q\m\q\n\v\f\7\4\e\z\t\z\5\a\h\f\y\g\a\8\t\a\2\a\j\g\c\z\m\i\2\h\s\f\q\d\1\h\q\n\k\6\q\l\x\o\q\1\a\z\b\5\1\m\s\m\3\0\l\8\3\i\p\v\x\c\r\9\q\0\f\a\1\6\m\b\4\5\t\g\u\q\1\0\v\f\i\w\q\n\v\g\t\k\i\e\5\6\p\d\3\9\3\c\s\i\i\r\u\c\r\t\i\d\o\v\j\9\a\t\x\1\a\l\x\k\9\u\5\l\u\9\w\h\5\w\4\8\9\6\x\n\9\n\q\k\4\o\h\6\v\k\i\s\1\a\f\0\j\0\a\p\j\v\o\1\x\q\p\8\9\1\a\g\k\k\v\0\4\9\d\e\y\i\r\0\l\9\5\s\l\m\t\j\z\1\x\8\d\l\1\x\a\s\h\6\5\a\z\2\9\t\q\x\4\n\k\p\j\d\r\5\u\e\r\x\7\a\o\v\4\e\h\6\5\9\y\7\b\1\j\q\7\c\n\v\v\f\1\d\u\7\a\s\v\g\y\o\8\o\4\t\d\d\h\e\2\o\6\4\q\7\f\9\9\o\y\q\0\k\8\1\a\s\e\y\9\f\l\z\a\2\i\a\q\4\e\t\o\0\p\0\e\v\b\e\w\2\e\2\2\9\8\k\g\0\4\j\w\7\x\4\4\7\b\9\q\j\k\y\f\0\u\r\r\w\b\a\4\6\j\r\f\n\w\c\4\i\4\v\9\w\i\f\9\y\0\p\f\j\u\7\k\v\e\j\f\e\f\3\i\i\c\4\l\f\k\l\8\s\0\q\s\3\i\r\y\n\f\v\q\7\4\1\d\d\1\j\x\k\1\t\a\g\8\v\a\f\x\u\r\8\h\w\l\m\y\g\d\8\q\0\z\x\q\w\z\b\h\9\o\d\d\9\r\b\7\y\v\0\9\p\h\x\r\f\1\d\a\j\n\x\t\m\v\t\h\o\6\x\1\g\o\u\u ]] 00:08:11.110 07:54:16 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:11.110 07:54:16 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:11.110 [2024-07-13 07:54:16.737531] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:11.110 [2024-07-13 07:54:16.737624] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69416 ] 00:08:11.110 [2024-07-13 07:54:16.874323] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.110 [2024-07-13 07:54:16.904631] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.369  Copying: 512/512 [B] (average 166 kBps) 00:08:11.369 00:08:11.369 07:54:17 -- dd/posix.sh@93 -- # [[ yhklu65bsgj12hg363x7ti5933w34084dg8rdpxw2gxroeoqmqnvf74eztz5ahfyga8ta2ajgczmi2hsfqd1hqnk6qlxoq1azb51msm30l83ipvxcr9q0fa16mb45tguq10vfiwqnvgtkie56pd393csiirucrtidovj9atx1alxk9u5lu9wh5w4896xn9nqk4oh6vkis1af0j0apjvo1xqp891agkkv049deyir0l95slmtjz1x8dl1xash65az29tqx4nkpjdr5uerx7aov4eh659y7b1jq7cnvvf1du7asvgyo8o4tddhe2o64q7f99oyq0k81asey9flza2iaq4eto0p0evbew2e2298kg04jw7x447b9qjkyf0urrwba46jrfnwc4i4v9wif9y0pfju7kvejfef3iic4lfkl8s0qs3irynfvq741dd1jxk1tag8vafxur8hwlmygd8q0zxqwzbh9odd9rb7yv09phxrf1dajnxtmvtho6x1gouu == \y\h\k\l\u\6\5\b\s\g\j\1\2\h\g\3\6\3\x\7\t\i\5\9\3\3\w\3\4\0\8\4\d\g\8\r\d\p\x\w\2\g\x\r\o\e\o\q\m\q\n\v\f\7\4\e\z\t\z\5\a\h\f\y\g\a\8\t\a\2\a\j\g\c\z\m\i\2\h\s\f\q\d\1\h\q\n\k\6\q\l\x\o\q\1\a\z\b\5\1\m\s\m\3\0\l\8\3\i\p\v\x\c\r\9\q\0\f\a\1\6\m\b\4\5\t\g\u\q\1\0\v\f\i\w\q\n\v\g\t\k\i\e\5\6\p\d\3\9\3\c\s\i\i\r\u\c\r\t\i\d\o\v\j\9\a\t\x\1\a\l\x\k\9\u\5\l\u\9\w\h\5\w\4\8\9\6\x\n\9\n\q\k\4\o\h\6\v\k\i\s\1\a\f\0\j\0\a\p\j\v\o\1\x\q\p\8\9\1\a\g\k\k\v\0\4\9\d\e\y\i\r\0\l\9\5\s\l\m\t\j\z\1\x\8\d\l\1\x\a\s\h\6\5\a\z\2\9\t\q\x\4\n\k\p\j\d\r\5\u\e\r\x\7\a\o\v\4\e\h\6\5\9\y\7\b\1\j\q\7\c\n\v\v\f\1\d\u\7\a\s\v\g\y\o\8\o\4\t\d\d\h\e\2\o\6\4\q\7\f\9\9\o\y\q\0\k\8\1\a\s\e\y\9\f\l\z\a\2\i\a\q\4\e\t\o\0\p\0\e\v\b\e\w\2\e\2\2\9\8\k\g\0\4\j\w\7\x\4\4\7\b\9\q\j\k\y\f\0\u\r\r\w\b\a\4\6\j\r\f\n\w\c\4\i\4\v\9\w\i\f\9\y\0\p\f\j\u\7\k\v\e\j\f\e\f\3\i\i\c\4\l\f\k\l\8\s\0\q\s\3\i\r\y\n\f\v\q\7\4\1\d\d\1\j\x\k\1\t\a\g\8\v\a\f\x\u\r\8\h\w\l\m\y\g\d\8\q\0\z\x\q\w\z\b\h\9\o\d\d\9\r\b\7\y\v\0\9\p\h\x\r\f\1\d\a\j\n\x\t\m\v\t\h\o\6\x\1\g\o\u\u ]] 00:08:11.369 07:54:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:11.369 07:54:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:11.369 [2024-07-13 07:54:17.155937] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:11.369 [2024-07-13 07:54:17.156031] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69418 ] 00:08:11.629 [2024-07-13 07:54:17.294419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.629 [2024-07-13 07:54:17.329870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.888  Copying: 512/512 [B] (average 500 kBps) 00:08:11.888 00:08:11.888 07:54:17 -- dd/posix.sh@93 -- # [[ yhklu65bsgj12hg363x7ti5933w34084dg8rdpxw2gxroeoqmqnvf74eztz5ahfyga8ta2ajgczmi2hsfqd1hqnk6qlxoq1azb51msm30l83ipvxcr9q0fa16mb45tguq10vfiwqnvgtkie56pd393csiirucrtidovj9atx1alxk9u5lu9wh5w4896xn9nqk4oh6vkis1af0j0apjvo1xqp891agkkv049deyir0l95slmtjz1x8dl1xash65az29tqx4nkpjdr5uerx7aov4eh659y7b1jq7cnvvf1du7asvgyo8o4tddhe2o64q7f99oyq0k81asey9flza2iaq4eto0p0evbew2e2298kg04jw7x447b9qjkyf0urrwba46jrfnwc4i4v9wif9y0pfju7kvejfef3iic4lfkl8s0qs3irynfvq741dd1jxk1tag8vafxur8hwlmygd8q0zxqwzbh9odd9rb7yv09phxrf1dajnxtmvtho6x1gouu == \y\h\k\l\u\6\5\b\s\g\j\1\2\h\g\3\6\3\x\7\t\i\5\9\3\3\w\3\4\0\8\4\d\g\8\r\d\p\x\w\2\g\x\r\o\e\o\q\m\q\n\v\f\7\4\e\z\t\z\5\a\h\f\y\g\a\8\t\a\2\a\j\g\c\z\m\i\2\h\s\f\q\d\1\h\q\n\k\6\q\l\x\o\q\1\a\z\b\5\1\m\s\m\3\0\l\8\3\i\p\v\x\c\r\9\q\0\f\a\1\6\m\b\4\5\t\g\u\q\1\0\v\f\i\w\q\n\v\g\t\k\i\e\5\6\p\d\3\9\3\c\s\i\i\r\u\c\r\t\i\d\o\v\j\9\a\t\x\1\a\l\x\k\9\u\5\l\u\9\w\h\5\w\4\8\9\6\x\n\9\n\q\k\4\o\h\6\v\k\i\s\1\a\f\0\j\0\a\p\j\v\o\1\x\q\p\8\9\1\a\g\k\k\v\0\4\9\d\e\y\i\r\0\l\9\5\s\l\m\t\j\z\1\x\8\d\l\1\x\a\s\h\6\5\a\z\2\9\t\q\x\4\n\k\p\j\d\r\5\u\e\r\x\7\a\o\v\4\e\h\6\5\9\y\7\b\1\j\q\7\c\n\v\v\f\1\d\u\7\a\s\v\g\y\o\8\o\4\t\d\d\h\e\2\o\6\4\q\7\f\9\9\o\y\q\0\k\8\1\a\s\e\y\9\f\l\z\a\2\i\a\q\4\e\t\o\0\p\0\e\v\b\e\w\2\e\2\2\9\8\k\g\0\4\j\w\7\x\4\4\7\b\9\q\j\k\y\f\0\u\r\r\w\b\a\4\6\j\r\f\n\w\c\4\i\4\v\9\w\i\f\9\y\0\p\f\j\u\7\k\v\e\j\f\e\f\3\i\i\c\4\l\f\k\l\8\s\0\q\s\3\i\r\y\n\f\v\q\7\4\1\d\d\1\j\x\k\1\t\a\g\8\v\a\f\x\u\r\8\h\w\l\m\y\g\d\8\q\0\z\x\q\w\z\b\h\9\o\d\d\9\r\b\7\y\v\0\9\p\h\x\r\f\1\d\a\j\n\x\t\m\v\t\h\o\6\x\1\g\o\u\u ]] 00:08:11.888 07:54:17 -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:11.888 07:54:17 -- dd/posix.sh@86 -- # gen_bytes 512 00:08:11.888 07:54:17 -- dd/common.sh@98 -- # xtrace_disable 00:08:11.888 07:54:17 -- common/autotest_common.sh@10 -- # set +x 00:08:11.888 07:54:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:11.888 07:54:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:11.888 [2024-07-13 07:54:17.577284] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:11.888 [2024-07-13 07:54:17.577378] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69420 ] 00:08:12.147 [2024-07-13 07:54:17.714516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.147 [2024-07-13 07:54:17.744307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.147  Copying: 512/512 [B] (average 500 kBps) 00:08:12.147 00:08:12.147 07:54:17 -- dd/posix.sh@93 -- # [[ fc832axs16i92fm0ynn15xafytaw7clagsddh3oyx62g1tbows8lvg8bqd1oim656l9jz3pa7weec9tudmeavko270xtyl8pedgv3kb9xrm0h9fq34u74t1cce2ahlfuhyh1eqy76uopqiain9nzgbb0uwehhqgpmu9v4m0hypxjwv9g7nxxjipt49j394jfsrdjslylxnfajjm47whqxcfumh989426trv0cuzx0i8uv7sn3fplkmd6x71sok93rj5ciszdjt8i5y2f737kl5qo0ys2pnacazsywyaga3tpdxna3vhmjjctm15jwwiesjqq7ihcqhxmysi2giqf1eobzeddo7t78jmr9uwir0frhtfhy57stbk733vizi19v2am7lunm3dzoikbv2k4swrrdhchkazjaxhk6fqyhr2bd4mnboql1dkjgr94eu70svrle3kmgiquof7ygztcdoax72tujublpevep3px2f8ajx01hftehwzvcrcv4ba9 == \f\c\8\3\2\a\x\s\1\6\i\9\2\f\m\0\y\n\n\1\5\x\a\f\y\t\a\w\7\c\l\a\g\s\d\d\h\3\o\y\x\6\2\g\1\t\b\o\w\s\8\l\v\g\8\b\q\d\1\o\i\m\6\5\6\l\9\j\z\3\p\a\7\w\e\e\c\9\t\u\d\m\e\a\v\k\o\2\7\0\x\t\y\l\8\p\e\d\g\v\3\k\b\9\x\r\m\0\h\9\f\q\3\4\u\7\4\t\1\c\c\e\2\a\h\l\f\u\h\y\h\1\e\q\y\7\6\u\o\p\q\i\a\i\n\9\n\z\g\b\b\0\u\w\e\h\h\q\g\p\m\u\9\v\4\m\0\h\y\p\x\j\w\v\9\g\7\n\x\x\j\i\p\t\4\9\j\3\9\4\j\f\s\r\d\j\s\l\y\l\x\n\f\a\j\j\m\4\7\w\h\q\x\c\f\u\m\h\9\8\9\4\2\6\t\r\v\0\c\u\z\x\0\i\8\u\v\7\s\n\3\f\p\l\k\m\d\6\x\7\1\s\o\k\9\3\r\j\5\c\i\s\z\d\j\t\8\i\5\y\2\f\7\3\7\k\l\5\q\o\0\y\s\2\p\n\a\c\a\z\s\y\w\y\a\g\a\3\t\p\d\x\n\a\3\v\h\m\j\j\c\t\m\1\5\j\w\w\i\e\s\j\q\q\7\i\h\c\q\h\x\m\y\s\i\2\g\i\q\f\1\e\o\b\z\e\d\d\o\7\t\7\8\j\m\r\9\u\w\i\r\0\f\r\h\t\f\h\y\5\7\s\t\b\k\7\3\3\v\i\z\i\1\9\v\2\a\m\7\l\u\n\m\3\d\z\o\i\k\b\v\2\k\4\s\w\r\r\d\h\c\h\k\a\z\j\a\x\h\k\6\f\q\y\h\r\2\b\d\4\m\n\b\o\q\l\1\d\k\j\g\r\9\4\e\u\7\0\s\v\r\l\e\3\k\m\g\i\q\u\o\f\7\y\g\z\t\c\d\o\a\x\7\2\t\u\j\u\b\l\p\e\v\e\p\3\p\x\2\f\8\a\j\x\0\1\h\f\t\e\h\w\z\v\c\r\c\v\4\b\a\9 ]] 00:08:12.147 07:54:17 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:12.147 07:54:17 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:12.406 [2024-07-13 07:54:17.972681] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:12.407 [2024-07-13 07:54:17.972789] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69427 ] 00:08:12.407 [2024-07-13 07:54:18.110315] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.407 [2024-07-13 07:54:18.139557] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.666  Copying: 512/512 [B] (average 500 kBps) 00:08:12.666 00:08:12.666 07:54:18 -- dd/posix.sh@93 -- # [[ fc832axs16i92fm0ynn15xafytaw7clagsddh3oyx62g1tbows8lvg8bqd1oim656l9jz3pa7weec9tudmeavko270xtyl8pedgv3kb9xrm0h9fq34u74t1cce2ahlfuhyh1eqy76uopqiain9nzgbb0uwehhqgpmu9v4m0hypxjwv9g7nxxjipt49j394jfsrdjslylxnfajjm47whqxcfumh989426trv0cuzx0i8uv7sn3fplkmd6x71sok93rj5ciszdjt8i5y2f737kl5qo0ys2pnacazsywyaga3tpdxna3vhmjjctm15jwwiesjqq7ihcqhxmysi2giqf1eobzeddo7t78jmr9uwir0frhtfhy57stbk733vizi19v2am7lunm3dzoikbv2k4swrrdhchkazjaxhk6fqyhr2bd4mnboql1dkjgr94eu70svrle3kmgiquof7ygztcdoax72tujublpevep3px2f8ajx01hftehwzvcrcv4ba9 == \f\c\8\3\2\a\x\s\1\6\i\9\2\f\m\0\y\n\n\1\5\x\a\f\y\t\a\w\7\c\l\a\g\s\d\d\h\3\o\y\x\6\2\g\1\t\b\o\w\s\8\l\v\g\8\b\q\d\1\o\i\m\6\5\6\l\9\j\z\3\p\a\7\w\e\e\c\9\t\u\d\m\e\a\v\k\o\2\7\0\x\t\y\l\8\p\e\d\g\v\3\k\b\9\x\r\m\0\h\9\f\q\3\4\u\7\4\t\1\c\c\e\2\a\h\l\f\u\h\y\h\1\e\q\y\7\6\u\o\p\q\i\a\i\n\9\n\z\g\b\b\0\u\w\e\h\h\q\g\p\m\u\9\v\4\m\0\h\y\p\x\j\w\v\9\g\7\n\x\x\j\i\p\t\4\9\j\3\9\4\j\f\s\r\d\j\s\l\y\l\x\n\f\a\j\j\m\4\7\w\h\q\x\c\f\u\m\h\9\8\9\4\2\6\t\r\v\0\c\u\z\x\0\i\8\u\v\7\s\n\3\f\p\l\k\m\d\6\x\7\1\s\o\k\9\3\r\j\5\c\i\s\z\d\j\t\8\i\5\y\2\f\7\3\7\k\l\5\q\o\0\y\s\2\p\n\a\c\a\z\s\y\w\y\a\g\a\3\t\p\d\x\n\a\3\v\h\m\j\j\c\t\m\1\5\j\w\w\i\e\s\j\q\q\7\i\h\c\q\h\x\m\y\s\i\2\g\i\q\f\1\e\o\b\z\e\d\d\o\7\t\7\8\j\m\r\9\u\w\i\r\0\f\r\h\t\f\h\y\5\7\s\t\b\k\7\3\3\v\i\z\i\1\9\v\2\a\m\7\l\u\n\m\3\d\z\o\i\k\b\v\2\k\4\s\w\r\r\d\h\c\h\k\a\z\j\a\x\h\k\6\f\q\y\h\r\2\b\d\4\m\n\b\o\q\l\1\d\k\j\g\r\9\4\e\u\7\0\s\v\r\l\e\3\k\m\g\i\q\u\o\f\7\y\g\z\t\c\d\o\a\x\7\2\t\u\j\u\b\l\p\e\v\e\p\3\p\x\2\f\8\a\j\x\0\1\h\f\t\e\h\w\z\v\c\r\c\v\4\b\a\9 ]] 00:08:12.666 07:54:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:12.666 07:54:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:12.666 [2024-07-13 07:54:18.361377] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:12.666 [2024-07-13 07:54:18.361471] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69429 ] 00:08:12.925 [2024-07-13 07:54:18.499310] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.925 [2024-07-13 07:54:18.534832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.925  Copying: 512/512 [B] (average 500 kBps) 00:08:12.925 00:08:12.925 07:54:18 -- dd/posix.sh@93 -- # [[ fc832axs16i92fm0ynn15xafytaw7clagsddh3oyx62g1tbows8lvg8bqd1oim656l9jz3pa7weec9tudmeavko270xtyl8pedgv3kb9xrm0h9fq34u74t1cce2ahlfuhyh1eqy76uopqiain9nzgbb0uwehhqgpmu9v4m0hypxjwv9g7nxxjipt49j394jfsrdjslylxnfajjm47whqxcfumh989426trv0cuzx0i8uv7sn3fplkmd6x71sok93rj5ciszdjt8i5y2f737kl5qo0ys2pnacazsywyaga3tpdxna3vhmjjctm15jwwiesjqq7ihcqhxmysi2giqf1eobzeddo7t78jmr9uwir0frhtfhy57stbk733vizi19v2am7lunm3dzoikbv2k4swrrdhchkazjaxhk6fqyhr2bd4mnboql1dkjgr94eu70svrle3kmgiquof7ygztcdoax72tujublpevep3px2f8ajx01hftehwzvcrcv4ba9 == \f\c\8\3\2\a\x\s\1\6\i\9\2\f\m\0\y\n\n\1\5\x\a\f\y\t\a\w\7\c\l\a\g\s\d\d\h\3\o\y\x\6\2\g\1\t\b\o\w\s\8\l\v\g\8\b\q\d\1\o\i\m\6\5\6\l\9\j\z\3\p\a\7\w\e\e\c\9\t\u\d\m\e\a\v\k\o\2\7\0\x\t\y\l\8\p\e\d\g\v\3\k\b\9\x\r\m\0\h\9\f\q\3\4\u\7\4\t\1\c\c\e\2\a\h\l\f\u\h\y\h\1\e\q\y\7\6\u\o\p\q\i\a\i\n\9\n\z\g\b\b\0\u\w\e\h\h\q\g\p\m\u\9\v\4\m\0\h\y\p\x\j\w\v\9\g\7\n\x\x\j\i\p\t\4\9\j\3\9\4\j\f\s\r\d\j\s\l\y\l\x\n\f\a\j\j\m\4\7\w\h\q\x\c\f\u\m\h\9\8\9\4\2\6\t\r\v\0\c\u\z\x\0\i\8\u\v\7\s\n\3\f\p\l\k\m\d\6\x\7\1\s\o\k\9\3\r\j\5\c\i\s\z\d\j\t\8\i\5\y\2\f\7\3\7\k\l\5\q\o\0\y\s\2\p\n\a\c\a\z\s\y\w\y\a\g\a\3\t\p\d\x\n\a\3\v\h\m\j\j\c\t\m\1\5\j\w\w\i\e\s\j\q\q\7\i\h\c\q\h\x\m\y\s\i\2\g\i\q\f\1\e\o\b\z\e\d\d\o\7\t\7\8\j\m\r\9\u\w\i\r\0\f\r\h\t\f\h\y\5\7\s\t\b\k\7\3\3\v\i\z\i\1\9\v\2\a\m\7\l\u\n\m\3\d\z\o\i\k\b\v\2\k\4\s\w\r\r\d\h\c\h\k\a\z\j\a\x\h\k\6\f\q\y\h\r\2\b\d\4\m\n\b\o\q\l\1\d\k\j\g\r\9\4\e\u\7\0\s\v\r\l\e\3\k\m\g\i\q\u\o\f\7\y\g\z\t\c\d\o\a\x\7\2\t\u\j\u\b\l\p\e\v\e\p\3\p\x\2\f\8\a\j\x\0\1\h\f\t\e\h\w\z\v\c\r\c\v\4\b\a\9 ]] 00:08:12.925 07:54:18 -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:12.925 07:54:18 -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:13.184 [2024-07-13 07:54:18.775207] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:13.184 [2024-07-13 07:54:18.775340] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69436 ] 00:08:13.184 [2024-07-13 07:54:18.913761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.184 [2024-07-13 07:54:18.944603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.445  Copying: 512/512 [B] (average 500 kBps) 00:08:13.445 00:08:13.445 07:54:19 -- dd/posix.sh@93 -- # [[ fc832axs16i92fm0ynn15xafytaw7clagsddh3oyx62g1tbows8lvg8bqd1oim656l9jz3pa7weec9tudmeavko270xtyl8pedgv3kb9xrm0h9fq34u74t1cce2ahlfuhyh1eqy76uopqiain9nzgbb0uwehhqgpmu9v4m0hypxjwv9g7nxxjipt49j394jfsrdjslylxnfajjm47whqxcfumh989426trv0cuzx0i8uv7sn3fplkmd6x71sok93rj5ciszdjt8i5y2f737kl5qo0ys2pnacazsywyaga3tpdxna3vhmjjctm15jwwiesjqq7ihcqhxmysi2giqf1eobzeddo7t78jmr9uwir0frhtfhy57stbk733vizi19v2am7lunm3dzoikbv2k4swrrdhchkazjaxhk6fqyhr2bd4mnboql1dkjgr94eu70svrle3kmgiquof7ygztcdoax72tujublpevep3px2f8ajx01hftehwzvcrcv4ba9 == \f\c\8\3\2\a\x\s\1\6\i\9\2\f\m\0\y\n\n\1\5\x\a\f\y\t\a\w\7\c\l\a\g\s\d\d\h\3\o\y\x\6\2\g\1\t\b\o\w\s\8\l\v\g\8\b\q\d\1\o\i\m\6\5\6\l\9\j\z\3\p\a\7\w\e\e\c\9\t\u\d\m\e\a\v\k\o\2\7\0\x\t\y\l\8\p\e\d\g\v\3\k\b\9\x\r\m\0\h\9\f\q\3\4\u\7\4\t\1\c\c\e\2\a\h\l\f\u\h\y\h\1\e\q\y\7\6\u\o\p\q\i\a\i\n\9\n\z\g\b\b\0\u\w\e\h\h\q\g\p\m\u\9\v\4\m\0\h\y\p\x\j\w\v\9\g\7\n\x\x\j\i\p\t\4\9\j\3\9\4\j\f\s\r\d\j\s\l\y\l\x\n\f\a\j\j\m\4\7\w\h\q\x\c\f\u\m\h\9\8\9\4\2\6\t\r\v\0\c\u\z\x\0\i\8\u\v\7\s\n\3\f\p\l\k\m\d\6\x\7\1\s\o\k\9\3\r\j\5\c\i\s\z\d\j\t\8\i\5\y\2\f\7\3\7\k\l\5\q\o\0\y\s\2\p\n\a\c\a\z\s\y\w\y\a\g\a\3\t\p\d\x\n\a\3\v\h\m\j\j\c\t\m\1\5\j\w\w\i\e\s\j\q\q\7\i\h\c\q\h\x\m\y\s\i\2\g\i\q\f\1\e\o\b\z\e\d\d\o\7\t\7\8\j\m\r\9\u\w\i\r\0\f\r\h\t\f\h\y\5\7\s\t\b\k\7\3\3\v\i\z\i\1\9\v\2\a\m\7\l\u\n\m\3\d\z\o\i\k\b\v\2\k\4\s\w\r\r\d\h\c\h\k\a\z\j\a\x\h\k\6\f\q\y\h\r\2\b\d\4\m\n\b\o\q\l\1\d\k\j\g\r\9\4\e\u\7\0\s\v\r\l\e\3\k\m\g\i\q\u\o\f\7\y\g\z\t\c\d\o\a\x\7\2\t\u\j\u\b\l\p\e\v\e\p\3\p\x\2\f\8\a\j\x\0\1\h\f\t\e\h\w\z\v\c\r\c\v\4\b\a\9 ]] 00:08:13.445 00:08:13.445 real 0m3.275s 00:08:13.445 user 0m1.599s 00:08:13.445 sys 0m0.695s 00:08:13.445 07:54:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.445 07:54:19 -- common/autotest_common.sh@10 -- # set +x 00:08:13.445 ************************************ 00:08:13.445 END TEST dd_flags_misc_forced_aio 00:08:13.445 ************************************ 00:08:13.445 07:54:19 -- dd/posix.sh@1 -- # cleanup 00:08:13.445 07:54:19 -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:13.445 07:54:19 -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:13.445 00:08:13.445 real 0m15.780s 00:08:13.445 user 0m6.694s 00:08:13.445 sys 0m3.262s 00:08:13.445 07:54:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:13.445 ************************************ 00:08:13.445 END TEST spdk_dd_posix 00:08:13.445 ************************************ 00:08:13.445 07:54:19 -- common/autotest_common.sh@10 -- # set +x 00:08:13.445 07:54:19 -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:13.445 07:54:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:13.445 07:54:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.445 07:54:19 -- 
common/autotest_common.sh@10 -- # set +x 00:08:13.445 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:13.445 ************************************ 00:08:13.445 START TEST spdk_dd_malloc 00:08:13.445 ************************************ 00:08:13.445 07:54:19 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:13.705 * Looking for test storage... 00:08:13.705 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:13.705 07:54:19 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:13.705 07:54:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:13.705 07:54:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:13.705 07:54:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:13.705 07:54:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.706 07:54:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.706 07:54:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.706 07:54:19 -- paths/export.sh@5 -- # export PATH 00:08:13.706 07:54:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:13.706 07:54:19 -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:13.706 07:54:19 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:13.706 07:54:19 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:13.706 07:54:19 -- common/autotest_common.sh@10 -- # set 
+x 00:08:13.706 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:13.706 ************************************ 00:08:13.706 START TEST dd_malloc_copy 00:08:13.706 ************************************ 00:08:13.706 07:54:19 -- common/autotest_common.sh@1104 -- # malloc_copy 00:08:13.706 07:54:19 -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:13.706 07:54:19 -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:13.706 07:54:19 -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:13.706 07:54:19 -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:13.706 07:54:19 -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:13.706 07:54:19 -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:13.706 07:54:19 -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:13.706 07:54:19 -- dd/malloc.sh@28 -- # gen_conf 00:08:13.706 07:54:19 -- dd/common.sh@31 -- # xtrace_disable 00:08:13.706 07:54:19 -- common/autotest_common.sh@10 -- # set +x 00:08:13.706 [2024-07-13 07:54:19.390895] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:13.706 [2024-07-13 07:54:19.391356] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69498 ] 00:08:13.706 { 00:08:13.706 "subsystems": [ 00:08:13.706 { 00:08:13.706 "subsystem": "bdev", 00:08:13.706 "config": [ 00:08:13.706 { 00:08:13.706 "params": { 00:08:13.706 "block_size": 512, 00:08:13.706 "num_blocks": 1048576, 00:08:13.706 "name": "malloc0" 00:08:13.706 }, 00:08:13.706 "method": "bdev_malloc_create" 00:08:13.706 }, 00:08:13.706 { 00:08:13.706 "params": { 00:08:13.706 "block_size": 512, 00:08:13.706 "num_blocks": 1048576, 00:08:13.706 "name": "malloc1" 00:08:13.706 }, 00:08:13.706 "method": "bdev_malloc_create" 00:08:13.706 }, 00:08:13.706 { 00:08:13.706 "method": "bdev_wait_for_examine" 00:08:13.706 } 00:08:13.706 ] 00:08:13.706 } 00:08:13.706 ] 00:08:13.706 } 00:08:13.965 [2024-07-13 07:54:19.523796] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.965 [2024-07-13 07:54:19.554571] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.538  Copying: 233/512 [MB] (233 MBps) Copying: 468/512 [MB] (234 MBps) Copying: 512/512 [MB] (average 233 MBps) 00:08:16.538 00:08:16.538 07:54:22 -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:16.538 07:54:22 -- dd/malloc.sh@33 -- # gen_conf 00:08:16.538 07:54:22 -- dd/common.sh@31 -- # xtrace_disable 00:08:16.538 07:54:22 -- common/autotest_common.sh@10 -- # set +x 00:08:16.539 [2024-07-13 07:54:22.318243] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:16.539 [2024-07-13 07:54:22.318344] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69522 ] 00:08:16.539 { 00:08:16.539 "subsystems": [ 00:08:16.539 { 00:08:16.539 "subsystem": "bdev", 00:08:16.539 "config": [ 00:08:16.539 { 00:08:16.539 "params": { 00:08:16.539 "block_size": 512, 00:08:16.539 "num_blocks": 1048576, 00:08:16.539 "name": "malloc0" 00:08:16.539 }, 00:08:16.539 "method": "bdev_malloc_create" 00:08:16.539 }, 00:08:16.539 { 00:08:16.539 "params": { 00:08:16.539 "block_size": 512, 00:08:16.539 "num_blocks": 1048576, 00:08:16.539 "name": "malloc1" 00:08:16.539 }, 00:08:16.539 "method": "bdev_malloc_create" 00:08:16.539 }, 00:08:16.539 { 00:08:16.539 "method": "bdev_wait_for_examine" 00:08:16.539 } 00:08:16.539 ] 00:08:16.539 } 00:08:16.539 ] 00:08:16.539 } 00:08:16.797 [2024-07-13 07:54:22.455868] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.797 [2024-07-13 07:54:22.487994] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.683  Copying: 229/512 [MB] (229 MBps) Copying: 460/512 [MB] (230 MBps) Copying: 512/512 [MB] (average 230 MBps) 00:08:19.683 00:08:19.683 00:08:19.683 real 0m5.877s 00:08:19.683 user 0m5.262s 00:08:19.683 sys 0m0.469s 00:08:19.683 07:54:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.683 ************************************ 00:08:19.683 END TEST dd_malloc_copy 00:08:19.683 ************************************ 00:08:19.683 07:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:19.683 00:08:19.683 real 0m6.016s 00:08:19.683 user 0m5.315s 00:08:19.683 sys 0m0.553s 00:08:19.683 07:54:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:19.683 ************************************ 00:08:19.683 END TEST spdk_dd_malloc 00:08:19.683 ************************************ 00:08:19.683 07:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:19.683 07:54:25 -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:19.683 07:54:25 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:08:19.683 07:54:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.683 07:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:19.683 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:19.683 ************************************ 00:08:19.683 START TEST spdk_dd_bdev_to_bdev 00:08:19.683 ************************************ 00:08:19.683 07:54:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:06.0 0000:00:07.0 00:08:19.683 * Looking for test storage... 
00:08:19.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:19.683 07:54:25 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:19.683 07:54:25 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.683 07:54:25 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.683 07:54:25 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.683 07:54:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.683 07:54:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.683 07:54:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.683 07:54:25 -- paths/export.sh@5 -- # export PATH 00:08:19.683 07:54:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:06.0 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@53 -- # nvme1_pci=0000:00:07.0 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@55 -- # 
method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:06.0' ['trtype']='pcie') 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:07.0' ['trtype']='pcie') 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:08:19.683 07:54:25 -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:19.683 07:54:25 -- common/autotest_common.sh@1077 -- # '[' 7 -le 1 ']' 00:08:19.683 07:54:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:19.683 07:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:19.683 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:19.683 ************************************ 00:08:19.683 START TEST dd_inflate_file 00:08:19.683 ************************************ 00:08:19.683 07:54:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:08:19.683 [2024-07-13 07:54:25.467640] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:19.683 [2024-07-13 07:54:25.467953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69602 ] 00:08:19.941 [2024-07-13 07:54:25.605367] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:19.941 [2024-07-13 07:54:25.636941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.200  Copying: 64/64 [MB] (average 1939 MBps) 00:08:20.200 00:08:20.200 ************************************ 00:08:20.200 END TEST dd_inflate_file 00:08:20.200 ************************************ 00:08:20.200 00:08:20.200 real 0m0.445s 00:08:20.200 user 0m0.213s 00:08:20.200 sys 0m0.115s 00:08:20.200 07:54:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:20.200 07:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:20.200 07:54:25 -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:08:20.200 07:54:25 -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:08:20.200 07:54:25 -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:20.200 07:54:25 -- common/autotest_common.sh@1077 -- # '[' 6 -le 1 ']' 00:08:20.200 07:54:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:20.200 07:54:25 -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:08:20.200 07:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:20.200 07:54:25 -- dd/common.sh@31 -- # xtrace_disable 00:08:20.200 07:54:25 -- common/autotest_common.sh@10 -- # set +x 00:08:20.200 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:20.200 ************************************ 00:08:20.200 START TEST dd_copy_to_out_bdev 00:08:20.200 ************************************ 00:08:20.200 07:54:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:08:20.200 [2024-07-13 07:54:25.962954] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:20.200 [2024-07-13 07:54:25.963089] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69628 ] 00:08:20.200 { 00:08:20.200 "subsystems": [ 00:08:20.200 { 00:08:20.200 "subsystem": "bdev", 00:08:20.200 "config": [ 00:08:20.200 { 00:08:20.200 "params": { 00:08:20.200 "trtype": "pcie", 00:08:20.200 "traddr": "0000:00:06.0", 00:08:20.200 "name": "Nvme0" 00:08:20.200 }, 00:08:20.200 "method": "bdev_nvme_attach_controller" 00:08:20.200 }, 00:08:20.200 { 00:08:20.200 "params": { 00:08:20.200 "trtype": "pcie", 00:08:20.200 "traddr": "0000:00:07.0", 00:08:20.200 "name": "Nvme1" 00:08:20.200 }, 00:08:20.200 "method": "bdev_nvme_attach_controller" 00:08:20.200 }, 00:08:20.200 { 00:08:20.200 "method": "bdev_wait_for_examine" 00:08:20.200 } 00:08:20.200 ] 00:08:20.200 } 00:08:20.200 ] 00:08:20.200 } 00:08:20.458 [2024-07-13 07:54:26.093540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.458 [2024-07-13 07:54:26.127763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.093  Copying: 51/64 [MB] (51 MBps) Copying: 64/64 [MB] (average 51 MBps) 00:08:22.093 00:08:22.093 00:08:22.093 real 0m1.835s 00:08:22.093 user 0m1.606s 00:08:22.093 sys 0m0.164s 00:08:22.093 07:54:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.093 07:54:27 -- common/autotest_common.sh@10 -- # set +x 00:08:22.093 ************************************ 00:08:22.093 END TEST dd_copy_to_out_bdev 00:08:22.093 ************************************ 00:08:22.093 07:54:27 -- dd/bdev_to_bdev.sh@113 -- # count=65 00:08:22.093 07:54:27 -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:08:22.093 07:54:27 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:22.093 07:54:27 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:22.093 07:54:27 -- common/autotest_common.sh@10 -- # set +x 00:08:22.093 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:22.093 ************************************ 00:08:22.093 START TEST dd_offset_magic 00:08:22.093 ************************************ 00:08:22.093 07:54:27 -- common/autotest_common.sh@1104 -- # offset_magic 00:08:22.093 07:54:27 -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:08:22.093 07:54:27 -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:08:22.093 07:54:27 -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:08:22.093 07:54:27 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:22.093 07:54:27 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:08:22.093 07:54:27 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:22.093 07:54:27 -- dd/common.sh@31 -- # xtrace_disable 00:08:22.093 07:54:27 -- common/autotest_common.sh@10 -- # set +x 00:08:22.093 [2024-07-13 07:54:27.860286] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:22.093 [2024-07-13 07:54:27.860432] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69660 ] 00:08:22.093 { 00:08:22.093 "subsystems": [ 00:08:22.093 { 00:08:22.093 "subsystem": "bdev", 00:08:22.093 "config": [ 00:08:22.093 { 00:08:22.093 "params": { 00:08:22.093 "trtype": "pcie", 00:08:22.093 "traddr": "0000:00:06.0", 00:08:22.093 "name": "Nvme0" 00:08:22.093 }, 00:08:22.093 "method": "bdev_nvme_attach_controller" 00:08:22.093 }, 00:08:22.093 { 00:08:22.093 "params": { 00:08:22.093 "trtype": "pcie", 00:08:22.093 "traddr": "0000:00:07.0", 00:08:22.093 "name": "Nvme1" 00:08:22.093 }, 00:08:22.093 "method": "bdev_nvme_attach_controller" 00:08:22.093 }, 00:08:22.093 { 00:08:22.093 "method": "bdev_wait_for_examine" 00:08:22.093 } 00:08:22.093 ] 00:08:22.093 } 00:08:22.093 ] 00:08:22.093 } 00:08:22.352 [2024-07-13 07:54:28.001142] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.352 [2024-07-13 07:54:28.039141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.870  Copying: 65/65 [MB] (average 955 MBps) 00:08:22.870 00:08:22.870 07:54:28 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:08:22.870 07:54:28 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:22.870 07:54:28 -- dd/common.sh@31 -- # xtrace_disable 00:08:22.870 07:54:28 -- common/autotest_common.sh@10 -- # set +x 00:08:22.870 [2024-07-13 07:54:28.513593] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:22.870 [2024-07-13 07:54:28.513675] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69674 ] 00:08:22.870 { 00:08:22.870 "subsystems": [ 00:08:22.870 { 00:08:22.870 "subsystem": "bdev", 00:08:22.870 "config": [ 00:08:22.870 { 00:08:22.870 "params": { 00:08:22.870 "trtype": "pcie", 00:08:22.870 "traddr": "0000:00:06.0", 00:08:22.870 "name": "Nvme0" 00:08:22.870 }, 00:08:22.870 "method": "bdev_nvme_attach_controller" 00:08:22.870 }, 00:08:22.870 { 00:08:22.870 "params": { 00:08:22.870 "trtype": "pcie", 00:08:22.870 "traddr": "0000:00:07.0", 00:08:22.870 "name": "Nvme1" 00:08:22.870 }, 00:08:22.870 "method": "bdev_nvme_attach_controller" 00:08:22.870 }, 00:08:22.870 { 00:08:22.870 "method": "bdev_wait_for_examine" 00:08:22.870 } 00:08:22.870 ] 00:08:22.870 } 00:08:22.870 ] 00:08:22.870 } 00:08:22.870 [2024-07-13 07:54:28.644205] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.870 [2024-07-13 07:54:28.675391] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.388  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:23.388 00:08:23.388 07:54:29 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:23.388 07:54:29 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:23.388 07:54:29 -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:08:23.388 07:54:29 -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:08:23.388 07:54:29 -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:08:23.388 07:54:29 -- dd/common.sh@31 -- # xtrace_disable 00:08:23.388 07:54:29 -- common/autotest_common.sh@10 -- # set +x 00:08:23.388 [2024-07-13 07:54:29.052690] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:23.388 [2024-07-13 07:54:29.052795] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69683 ] 00:08:23.388 { 00:08:23.388 "subsystems": [ 00:08:23.388 { 00:08:23.388 "subsystem": "bdev", 00:08:23.388 "config": [ 00:08:23.388 { 00:08:23.388 "params": { 00:08:23.388 "trtype": "pcie", 00:08:23.388 "traddr": "0000:00:06.0", 00:08:23.388 "name": "Nvme0" 00:08:23.388 }, 00:08:23.388 "method": "bdev_nvme_attach_controller" 00:08:23.388 }, 00:08:23.388 { 00:08:23.388 "params": { 00:08:23.388 "trtype": "pcie", 00:08:23.388 "traddr": "0000:00:07.0", 00:08:23.388 "name": "Nvme1" 00:08:23.388 }, 00:08:23.388 "method": "bdev_nvme_attach_controller" 00:08:23.388 }, 00:08:23.388 { 00:08:23.388 "method": "bdev_wait_for_examine" 00:08:23.388 } 00:08:23.388 ] 00:08:23.388 } 00:08:23.388 ] 00:08:23.388 } 00:08:23.388 [2024-07-13 07:54:29.190999] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.647 [2024-07-13 07:54:29.221724] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.905  Copying: 65/65 [MB] (average 1101 MBps) 00:08:23.905 00:08:23.905 07:54:29 -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:08:23.905 07:54:29 -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:08:23.905 07:54:29 -- dd/common.sh@31 -- # xtrace_disable 00:08:23.905 07:54:29 -- common/autotest_common.sh@10 -- # set +x 00:08:23.905 [2024-07-13 07:54:29.680022] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:23.905 [2024-07-13 07:54:29.680112] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69697 ] 00:08:23.905 { 00:08:23.905 "subsystems": [ 00:08:23.905 { 00:08:23.905 "subsystem": "bdev", 00:08:23.905 "config": [ 00:08:23.905 { 00:08:23.905 "params": { 00:08:23.905 "trtype": "pcie", 00:08:23.905 "traddr": "0000:00:06.0", 00:08:23.905 "name": "Nvme0" 00:08:23.905 }, 00:08:23.905 "method": "bdev_nvme_attach_controller" 00:08:23.905 }, 00:08:23.905 { 00:08:23.905 "params": { 00:08:23.905 "trtype": "pcie", 00:08:23.905 "traddr": "0000:00:07.0", 00:08:23.905 "name": "Nvme1" 00:08:23.905 }, 00:08:23.905 "method": "bdev_nvme_attach_controller" 00:08:23.905 }, 00:08:23.905 { 00:08:23.905 "method": "bdev_wait_for_examine" 00:08:23.905 } 00:08:23.905 ] 00:08:23.905 } 00:08:23.905 ] 00:08:23.905 } 00:08:24.165 [2024-07-13 07:54:29.818681] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.165 [2024-07-13 07:54:29.849125] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.424  Copying: 1024/1024 [kB] (average 1000 MBps) 00:08:24.424 00:08:24.424 07:54:30 -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:08:24.424 07:54:30 -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:08:24.424 00:08:24.424 real 0m2.349s 00:08:24.424 user 0m1.705s 00:08:24.424 sys 0m0.455s 00:08:24.424 07:54:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:24.424 07:54:30 -- common/autotest_common.sh@10 -- # set +x 00:08:24.424 ************************************ 00:08:24.424 END TEST dd_offset_magic 00:08:24.424 ************************************ 00:08:24.424 07:54:30 -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:08:24.424 07:54:30 -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:08:24.424 07:54:30 -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:24.424 07:54:30 -- dd/common.sh@11 -- # local nvme_ref= 00:08:24.424 07:54:30 -- dd/common.sh@12 -- # local size=4194330 00:08:24.424 07:54:30 -- dd/common.sh@14 -- # local bs=1048576 00:08:24.424 07:54:30 -- dd/common.sh@15 -- # local count=5 00:08:24.424 07:54:30 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:08:24.424 07:54:30 -- dd/common.sh@18 -- # gen_conf 00:08:24.424 07:54:30 -- dd/common.sh@31 -- # xtrace_disable 00:08:24.424 07:54:30 -- common/autotest_common.sh@10 -- # set +x 00:08:24.683 [2024-07-13 07:54:30.254583] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:24.683 [2024-07-13 07:54:30.254672] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69721 ] 00:08:24.683 { 00:08:24.683 "subsystems": [ 00:08:24.683 { 00:08:24.683 "subsystem": "bdev", 00:08:24.683 "config": [ 00:08:24.684 { 00:08:24.684 "params": { 00:08:24.684 "trtype": "pcie", 00:08:24.684 "traddr": "0000:00:06.0", 00:08:24.684 "name": "Nvme0" 00:08:24.684 }, 00:08:24.684 "method": "bdev_nvme_attach_controller" 00:08:24.684 }, 00:08:24.684 { 00:08:24.684 "params": { 00:08:24.684 "trtype": "pcie", 00:08:24.684 "traddr": "0000:00:07.0", 00:08:24.684 "name": "Nvme1" 00:08:24.684 }, 00:08:24.684 "method": "bdev_nvme_attach_controller" 00:08:24.684 }, 00:08:24.684 { 00:08:24.684 "method": "bdev_wait_for_examine" 00:08:24.684 } 00:08:24.684 ] 00:08:24.684 } 00:08:24.684 ] 00:08:24.684 } 00:08:24.684 [2024-07-13 07:54:30.388697] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.684 [2024-07-13 07:54:30.419603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.943  Copying: 5120/5120 [kB] (average 1250 MBps) 00:08:24.943 00:08:24.943 07:54:30 -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:08:24.943 07:54:30 -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:08:24.943 07:54:30 -- dd/common.sh@11 -- # local nvme_ref= 00:08:24.943 07:54:30 -- dd/common.sh@12 -- # local size=4194330 00:08:24.943 07:54:30 -- dd/common.sh@14 -- # local bs=1048576 00:08:24.943 07:54:30 -- dd/common.sh@15 -- # local count=5 00:08:24.943 07:54:30 -- dd/common.sh@18 -- # gen_conf 00:08:24.943 07:54:30 -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:08:24.943 07:54:30 -- dd/common.sh@31 -- # xtrace_disable 00:08:24.943 07:54:30 -- common/autotest_common.sh@10 -- # set +x 00:08:25.257 [2024-07-13 07:54:30.801335] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:25.258 [2024-07-13 07:54:30.801601] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69735 ] 00:08:25.258 { 00:08:25.258 "subsystems": [ 00:08:25.258 { 00:08:25.258 "subsystem": "bdev", 00:08:25.258 "config": [ 00:08:25.258 { 00:08:25.258 "params": { 00:08:25.258 "trtype": "pcie", 00:08:25.258 "traddr": "0000:00:06.0", 00:08:25.258 "name": "Nvme0" 00:08:25.258 }, 00:08:25.258 "method": "bdev_nvme_attach_controller" 00:08:25.258 }, 00:08:25.258 { 00:08:25.258 "params": { 00:08:25.258 "trtype": "pcie", 00:08:25.258 "traddr": "0000:00:07.0", 00:08:25.258 "name": "Nvme1" 00:08:25.258 }, 00:08:25.258 "method": "bdev_nvme_attach_controller" 00:08:25.258 }, 00:08:25.258 { 00:08:25.258 "method": "bdev_wait_for_examine" 00:08:25.258 } 00:08:25.258 ] 00:08:25.258 } 00:08:25.258 ] 00:08:25.258 } 00:08:25.258 [2024-07-13 07:54:30.939063] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.258 [2024-07-13 07:54:30.970436] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.532  Copying: 5120/5120 [kB] (average 1000 MBps) 00:08:25.532 00:08:25.532 07:54:31 -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:08:25.532 ************************************ 00:08:25.532 END TEST spdk_dd_bdev_to_bdev 00:08:25.532 ************************************ 00:08:25.532 00:08:25.532 real 0m5.995s 00:08:25.532 user 0m4.401s 00:08:25.532 sys 0m1.107s 00:08:25.532 07:54:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.532 07:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:25.792 07:54:31 -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:08:25.792 07:54:31 -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:25.792 07:54:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:25.792 07:54:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.792 07:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:25.792 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:25.792 ************************************ 00:08:25.792 START TEST spdk_dd_uring 00:08:25.792 ************************************ 00:08:25.792 07:54:31 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:08:25.792 * Looking for test storage... 
00:08:25.792 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:25.792 07:54:31 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:25.792 07:54:31 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.792 07:54:31 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.792 07:54:31 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.792 07:54:31 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.792 07:54:31 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.792 07:54:31 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.792 07:54:31 -- paths/export.sh@5 -- # export PATH 00:08:25.792 07:54:31 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.792 07:54:31 -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:08:25.792 07:54:31 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:25.792 07:54:31 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:25.792 07:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:25.792 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:25.792 ************************************ 00:08:25.792 START TEST dd_uring_copy 00:08:25.792 ************************************ 00:08:25.792 07:54:31 -- common/autotest_common.sh@1104 -- # uring_zram_copy 00:08:25.792 07:54:31 -- dd/uring.sh@15 -- # local zram_dev_id 00:08:25.792 07:54:31 -- dd/uring.sh@16 -- # 
local magic 00:08:25.792 07:54:31 -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:08:25.792 07:54:31 -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:25.792 07:54:31 -- dd/uring.sh@19 -- # local verify_magic 00:08:25.792 07:54:31 -- dd/uring.sh@21 -- # init_zram 00:08:25.792 07:54:31 -- dd/common.sh@163 -- # [[ -e /sys/class/zram-control ]] 00:08:25.792 07:54:31 -- dd/common.sh@164 -- # return 00:08:25.792 07:54:31 -- dd/uring.sh@22 -- # create_zram_dev 00:08:25.792 07:54:31 -- dd/common.sh@168 -- # cat /sys/class/zram-control/hot_add 00:08:25.792 07:54:31 -- dd/uring.sh@22 -- # zram_dev_id=1 00:08:25.792 07:54:31 -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:08:25.792 07:54:31 -- dd/common.sh@181 -- # local id=1 00:08:25.792 07:54:31 -- dd/common.sh@182 -- # local size=512M 00:08:25.792 07:54:31 -- dd/common.sh@184 -- # [[ -e /sys/block/zram1 ]] 00:08:25.792 07:54:31 -- dd/common.sh@186 -- # echo 512M 00:08:25.792 07:54:31 -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:08:25.792 07:54:31 -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:08:25.792 07:54:31 -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:08:25.792 07:54:31 -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:08:25.792 07:54:31 -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:08:25.792 07:54:31 -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:08:25.792 07:54:31 -- dd/uring.sh@41 -- # gen_bytes 1024 00:08:25.792 07:54:31 -- dd/common.sh@98 -- # xtrace_disable 00:08:25.792 07:54:31 -- common/autotest_common.sh@10 -- # set +x 00:08:25.792 07:54:31 -- dd/uring.sh@41 -- # magic=8aqesjgfc5pccxe03p9pgv1c0mmxnrsef71rsecwil1zks41taai87ax4474fv2fx1rsxecm0n9255ufzipuw1ca1f468fzli3mad6gornwdvxg05qxvzq2odbtdijqiijnrxvrr14tmpy7476j2a7ms3lyrhodq4bm1wseveydn5nltoun1tcm6kshrjl2a1ld94g405snxo5895b70n297beggv8c8x0sqnt3q7pbur7cdeeznc3o3gheo49f0ernb46qm7siz6vmrdlvfuag0oa45kqlob3c1a9z6a0ocmbz8ktv7wt4z98fj8rek5wdksaj6db96kkz8d4nbtka3ietgwfcs7haxx71bspm3g5gl6iuucu4tn6hf2i22fso97sunzwke9hoqo740w1lbtolgg3up71lzfo209tw43e0li7uyhqrc3smey6kg22roib6yustnfl5vstjqaivubfen577mgadmyntjie7rqdgzudcixt1zhsz5pzu9fmn5x9simhgvgymkixqnlev39bfl6fnnnt36ct9huxxaamc829bxtnl0xn9n6n7f7kjij8glxlcujmmkvetfi94mra1snjzdfv57o0da37yjsny415esiqg4nlbtczoisn83eoyynqzvlsojdgfl3tbyh0e7utikvxckz75crg54gy4ymapu0nc4v2xfrvfetnwym58hofxbh4t7fuir96pgq40fyn8mwu6sy9xo3c8wwdvu2sbtbk3lrys0wuqxqtp34gxnylfrecxt6sluns2d1306uryd10z97nr29bxs7061ktk1nhib23z1iwal01y5p1241god48mw4ijumgp09cse7yf27vwtcm82u5wt4m3c4hg4s95turvw3fr8xjts4oglvq06383u5kf0nlzz6gqnp2393k9wibaogypsxj5knaw1yuqz3f7eg5axm6yo1ixpr4xiqngot01gikrl8i3175g1ak3b3smsme74kl3jo1fy18ia4mwq0xx8 00:08:25.792 07:54:31 -- dd/uring.sh@42 -- # echo 
8aqesjgfc5pccxe03p9pgv1c0mmxnrsef71rsecwil1zks41taai87ax4474fv2fx1rsxecm0n9255ufzipuw1ca1f468fzli3mad6gornwdvxg05qxvzq2odbtdijqiijnrxvrr14tmpy7476j2a7ms3lyrhodq4bm1wseveydn5nltoun1tcm6kshrjl2a1ld94g405snxo5895b70n297beggv8c8x0sqnt3q7pbur7cdeeznc3o3gheo49f0ernb46qm7siz6vmrdlvfuag0oa45kqlob3c1a9z6a0ocmbz8ktv7wt4z98fj8rek5wdksaj6db96kkz8d4nbtka3ietgwfcs7haxx71bspm3g5gl6iuucu4tn6hf2i22fso97sunzwke9hoqo740w1lbtolgg3up71lzfo209tw43e0li7uyhqrc3smey6kg22roib6yustnfl5vstjqaivubfen577mgadmyntjie7rqdgzudcixt1zhsz5pzu9fmn5x9simhgvgymkixqnlev39bfl6fnnnt36ct9huxxaamc829bxtnl0xn9n6n7f7kjij8glxlcujmmkvetfi94mra1snjzdfv57o0da37yjsny415esiqg4nlbtczoisn83eoyynqzvlsojdgfl3tbyh0e7utikvxckz75crg54gy4ymapu0nc4v2xfrvfetnwym58hofxbh4t7fuir96pgq40fyn8mwu6sy9xo3c8wwdvu2sbtbk3lrys0wuqxqtp34gxnylfrecxt6sluns2d1306uryd10z97nr29bxs7061ktk1nhib23z1iwal01y5p1241god48mw4ijumgp09cse7yf27vwtcm82u5wt4m3c4hg4s95turvw3fr8xjts4oglvq06383u5kf0nlzz6gqnp2393k9wibaogypsxj5knaw1yuqz3f7eg5axm6yo1ixpr4xiqngot01gikrl8i3175g1ak3b3smsme74kl3jo1fy18ia4mwq0xx8 00:08:25.792 07:54:31 -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:08:25.792 [2024-07-13 07:54:31.537954] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:25.792 [2024-07-13 07:54:31.538059] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69797 ] 00:08:26.051 [2024-07-13 07:54:31.671870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.051 [2024-07-13 07:54:31.705017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.569  Copying: 511/511 [MB] (average 1841 MBps) 00:08:26.569 00:08:26.569 07:54:32 -- dd/uring.sh@54 -- # gen_conf 00:08:26.569 07:54:32 -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:08:26.569 07:54:32 -- dd/common.sh@31 -- # xtrace_disable 00:08:26.569 07:54:32 -- common/autotest_common.sh@10 -- # set +x 00:08:26.569 [2024-07-13 07:54:32.382984] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:26.569 [2024-07-13 07:54:32.383111] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69800 ] 00:08:26.827 { 00:08:26.827 "subsystems": [ 00:08:26.827 { 00:08:26.827 "subsystem": "bdev", 00:08:26.827 "config": [ 00:08:26.827 { 00:08:26.827 "params": { 00:08:26.827 "block_size": 512, 00:08:26.827 "num_blocks": 1048576, 00:08:26.827 "name": "malloc0" 00:08:26.827 }, 00:08:26.827 "method": "bdev_malloc_create" 00:08:26.827 }, 00:08:26.827 { 00:08:26.827 "params": { 00:08:26.828 "filename": "/dev/zram1", 00:08:26.828 "name": "uring0" 00:08:26.828 }, 00:08:26.828 "method": "bdev_uring_create" 00:08:26.828 }, 00:08:26.828 { 00:08:26.828 "method": "bdev_wait_for_examine" 00:08:26.828 } 00:08:26.828 ] 00:08:26.828 } 00:08:26.828 ] 00:08:26.828 } 00:08:26.828 [2024-07-13 07:54:32.518769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.828 [2024-07-13 07:54:32.548305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.656  Copying: 228/512 [MB] (228 MBps) Copying: 428/512 [MB] (199 MBps) Copying: 512/512 [MB] (average 213 MBps) 00:08:29.656 00:08:29.656 07:54:35 -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:08:29.656 07:54:35 -- dd/uring.sh@60 -- # gen_conf 00:08:29.656 07:54:35 -- dd/common.sh@31 -- # xtrace_disable 00:08:29.656 07:54:35 -- common/autotest_common.sh@10 -- # set +x 00:08:29.656 [2024-07-13 07:54:35.374956] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:29.656 [2024-07-13 07:54:35.375066] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69825 ] 00:08:29.656 { 00:08:29.656 "subsystems": [ 00:08:29.656 { 00:08:29.656 "subsystem": "bdev", 00:08:29.656 "config": [ 00:08:29.656 { 00:08:29.656 "params": { 00:08:29.656 "block_size": 512, 00:08:29.656 "num_blocks": 1048576, 00:08:29.656 "name": "malloc0" 00:08:29.656 }, 00:08:29.656 "method": "bdev_malloc_create" 00:08:29.656 }, 00:08:29.656 { 00:08:29.656 "params": { 00:08:29.656 "filename": "/dev/zram1", 00:08:29.656 "name": "uring0" 00:08:29.656 }, 00:08:29.656 "method": "bdev_uring_create" 00:08:29.656 }, 00:08:29.656 { 00:08:29.656 "method": "bdev_wait_for_examine" 00:08:29.656 } 00:08:29.656 ] 00:08:29.656 } 00:08:29.656 ] 00:08:29.656 } 00:08:29.914 [2024-07-13 07:54:35.510312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.914 [2024-07-13 07:54:35.540479] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.679  Copying: 163/512 [MB] (163 MBps) Copying: 321/512 [MB] (158 MBps) Copying: 474/512 [MB] (153 MBps) Copying: 512/512 [MB] (average 150 MBps) 00:08:33.679 00:08:33.679 07:54:39 -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:08:33.679 07:54:39 -- dd/uring.sh@66 -- # [[ 
8aqesjgfc5pccxe03p9pgv1c0mmxnrsef71rsecwil1zks41taai87ax4474fv2fx1rsxecm0n9255ufzipuw1ca1f468fzli3mad6gornwdvxg05qxvzq2odbtdijqiijnrxvrr14tmpy7476j2a7ms3lyrhodq4bm1wseveydn5nltoun1tcm6kshrjl2a1ld94g405snxo5895b70n297beggv8c8x0sqnt3q7pbur7cdeeznc3o3gheo49f0ernb46qm7siz6vmrdlvfuag0oa45kqlob3c1a9z6a0ocmbz8ktv7wt4z98fj8rek5wdksaj6db96kkz8d4nbtka3ietgwfcs7haxx71bspm3g5gl6iuucu4tn6hf2i22fso97sunzwke9hoqo740w1lbtolgg3up71lzfo209tw43e0li7uyhqrc3smey6kg22roib6yustnfl5vstjqaivubfen577mgadmyntjie7rqdgzudcixt1zhsz5pzu9fmn5x9simhgvgymkixqnlev39bfl6fnnnt36ct9huxxaamc829bxtnl0xn9n6n7f7kjij8glxlcujmmkvetfi94mra1snjzdfv57o0da37yjsny415esiqg4nlbtczoisn83eoyynqzvlsojdgfl3tbyh0e7utikvxckz75crg54gy4ymapu0nc4v2xfrvfetnwym58hofxbh4t7fuir96pgq40fyn8mwu6sy9xo3c8wwdvu2sbtbk3lrys0wuqxqtp34gxnylfrecxt6sluns2d1306uryd10z97nr29bxs7061ktk1nhib23z1iwal01y5p1241god48mw4ijumgp09cse7yf27vwtcm82u5wt4m3c4hg4s95turvw3fr8xjts4oglvq06383u5kf0nlzz6gqnp2393k9wibaogypsxj5knaw1yuqz3f7eg5axm6yo1ixpr4xiqngot01gikrl8i3175g1ak3b3smsme74kl3jo1fy18ia4mwq0xx8 == \8\a\q\e\s\j\g\f\c\5\p\c\c\x\e\0\3\p\9\p\g\v\1\c\0\m\m\x\n\r\s\e\f\7\1\r\s\e\c\w\i\l\1\z\k\s\4\1\t\a\a\i\8\7\a\x\4\4\7\4\f\v\2\f\x\1\r\s\x\e\c\m\0\n\9\2\5\5\u\f\z\i\p\u\w\1\c\a\1\f\4\6\8\f\z\l\i\3\m\a\d\6\g\o\r\n\w\d\v\x\g\0\5\q\x\v\z\q\2\o\d\b\t\d\i\j\q\i\i\j\n\r\x\v\r\r\1\4\t\m\p\y\7\4\7\6\j\2\a\7\m\s\3\l\y\r\h\o\d\q\4\b\m\1\w\s\e\v\e\y\d\n\5\n\l\t\o\u\n\1\t\c\m\6\k\s\h\r\j\l\2\a\1\l\d\9\4\g\4\0\5\s\n\x\o\5\8\9\5\b\7\0\n\2\9\7\b\e\g\g\v\8\c\8\x\0\s\q\n\t\3\q\7\p\b\u\r\7\c\d\e\e\z\n\c\3\o\3\g\h\e\o\4\9\f\0\e\r\n\b\4\6\q\m\7\s\i\z\6\v\m\r\d\l\v\f\u\a\g\0\o\a\4\5\k\q\l\o\b\3\c\1\a\9\z\6\a\0\o\c\m\b\z\8\k\t\v\7\w\t\4\z\9\8\f\j\8\r\e\k\5\w\d\k\s\a\j\6\d\b\9\6\k\k\z\8\d\4\n\b\t\k\a\3\i\e\t\g\w\f\c\s\7\h\a\x\x\7\1\b\s\p\m\3\g\5\g\l\6\i\u\u\c\u\4\t\n\6\h\f\2\i\2\2\f\s\o\9\7\s\u\n\z\w\k\e\9\h\o\q\o\7\4\0\w\1\l\b\t\o\l\g\g\3\u\p\7\1\l\z\f\o\2\0\9\t\w\4\3\e\0\l\i\7\u\y\h\q\r\c\3\s\m\e\y\6\k\g\2\2\r\o\i\b\6\y\u\s\t\n\f\l\5\v\s\t\j\q\a\i\v\u\b\f\e\n\5\7\7\m\g\a\d\m\y\n\t\j\i\e\7\r\q\d\g\z\u\d\c\i\x\t\1\z\h\s\z\5\p\z\u\9\f\m\n\5\x\9\s\i\m\h\g\v\g\y\m\k\i\x\q\n\l\e\v\3\9\b\f\l\6\f\n\n\n\t\3\6\c\t\9\h\u\x\x\a\a\m\c\8\2\9\b\x\t\n\l\0\x\n\9\n\6\n\7\f\7\k\j\i\j\8\g\l\x\l\c\u\j\m\m\k\v\e\t\f\i\9\4\m\r\a\1\s\n\j\z\d\f\v\5\7\o\0\d\a\3\7\y\j\s\n\y\4\1\5\e\s\i\q\g\4\n\l\b\t\c\z\o\i\s\n\8\3\e\o\y\y\n\q\z\v\l\s\o\j\d\g\f\l\3\t\b\y\h\0\e\7\u\t\i\k\v\x\c\k\z\7\5\c\r\g\5\4\g\y\4\y\m\a\p\u\0\n\c\4\v\2\x\f\r\v\f\e\t\n\w\y\m\5\8\h\o\f\x\b\h\4\t\7\f\u\i\r\9\6\p\g\q\4\0\f\y\n\8\m\w\u\6\s\y\9\x\o\3\c\8\w\w\d\v\u\2\s\b\t\b\k\3\l\r\y\s\0\w\u\q\x\q\t\p\3\4\g\x\n\y\l\f\r\e\c\x\t\6\s\l\u\n\s\2\d\1\3\0\6\u\r\y\d\1\0\z\9\7\n\r\2\9\b\x\s\7\0\6\1\k\t\k\1\n\h\i\b\2\3\z\1\i\w\a\l\0\1\y\5\p\1\2\4\1\g\o\d\4\8\m\w\4\i\j\u\m\g\p\0\9\c\s\e\7\y\f\2\7\v\w\t\c\m\8\2\u\5\w\t\4\m\3\c\4\h\g\4\s\9\5\t\u\r\v\w\3\f\r\8\x\j\t\s\4\o\g\l\v\q\0\6\3\8\3\u\5\k\f\0\n\l\z\z\6\g\q\n\p\2\3\9\3\k\9\w\i\b\a\o\g\y\p\s\x\j\5\k\n\a\w\1\y\u\q\z\3\f\7\e\g\5\a\x\m\6\y\o\1\i\x\p\r\4\x\i\q\n\g\o\t\0\1\g\i\k\r\l\8\i\3\1\7\5\g\1\a\k\3\b\3\s\m\s\m\e\7\4\k\l\3\j\o\1\f\y\1\8\i\a\4\m\w\q\0\x\x\8 ]] 00:08:33.679 07:54:39 -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:08:33.680 07:54:39 -- dd/uring.sh@69 -- # [[ 
8aqesjgfc5pccxe03p9pgv1c0mmxnrsef71rsecwil1zks41taai87ax4474fv2fx1rsxecm0n9255ufzipuw1ca1f468fzli3mad6gornwdvxg05qxvzq2odbtdijqiijnrxvrr14tmpy7476j2a7ms3lyrhodq4bm1wseveydn5nltoun1tcm6kshrjl2a1ld94g405snxo5895b70n297beggv8c8x0sqnt3q7pbur7cdeeznc3o3gheo49f0ernb46qm7siz6vmrdlvfuag0oa45kqlob3c1a9z6a0ocmbz8ktv7wt4z98fj8rek5wdksaj6db96kkz8d4nbtka3ietgwfcs7haxx71bspm3g5gl6iuucu4tn6hf2i22fso97sunzwke9hoqo740w1lbtolgg3up71lzfo209tw43e0li7uyhqrc3smey6kg22roib6yustnfl5vstjqaivubfen577mgadmyntjie7rqdgzudcixt1zhsz5pzu9fmn5x9simhgvgymkixqnlev39bfl6fnnnt36ct9huxxaamc829bxtnl0xn9n6n7f7kjij8glxlcujmmkvetfi94mra1snjzdfv57o0da37yjsny415esiqg4nlbtczoisn83eoyynqzvlsojdgfl3tbyh0e7utikvxckz75crg54gy4ymapu0nc4v2xfrvfetnwym58hofxbh4t7fuir96pgq40fyn8mwu6sy9xo3c8wwdvu2sbtbk3lrys0wuqxqtp34gxnylfrecxt6sluns2d1306uryd10z97nr29bxs7061ktk1nhib23z1iwal01y5p1241god48mw4ijumgp09cse7yf27vwtcm82u5wt4m3c4hg4s95turvw3fr8xjts4oglvq06383u5kf0nlzz6gqnp2393k9wibaogypsxj5knaw1yuqz3f7eg5axm6yo1ixpr4xiqngot01gikrl8i3175g1ak3b3smsme74kl3jo1fy18ia4mwq0xx8 == \8\a\q\e\s\j\g\f\c\5\p\c\c\x\e\0\3\p\9\p\g\v\1\c\0\m\m\x\n\r\s\e\f\7\1\r\s\e\c\w\i\l\1\z\k\s\4\1\t\a\a\i\8\7\a\x\4\4\7\4\f\v\2\f\x\1\r\s\x\e\c\m\0\n\9\2\5\5\u\f\z\i\p\u\w\1\c\a\1\f\4\6\8\f\z\l\i\3\m\a\d\6\g\o\r\n\w\d\v\x\g\0\5\q\x\v\z\q\2\o\d\b\t\d\i\j\q\i\i\j\n\r\x\v\r\r\1\4\t\m\p\y\7\4\7\6\j\2\a\7\m\s\3\l\y\r\h\o\d\q\4\b\m\1\w\s\e\v\e\y\d\n\5\n\l\t\o\u\n\1\t\c\m\6\k\s\h\r\j\l\2\a\1\l\d\9\4\g\4\0\5\s\n\x\o\5\8\9\5\b\7\0\n\2\9\7\b\e\g\g\v\8\c\8\x\0\s\q\n\t\3\q\7\p\b\u\r\7\c\d\e\e\z\n\c\3\o\3\g\h\e\o\4\9\f\0\e\r\n\b\4\6\q\m\7\s\i\z\6\v\m\r\d\l\v\f\u\a\g\0\o\a\4\5\k\q\l\o\b\3\c\1\a\9\z\6\a\0\o\c\m\b\z\8\k\t\v\7\w\t\4\z\9\8\f\j\8\r\e\k\5\w\d\k\s\a\j\6\d\b\9\6\k\k\z\8\d\4\n\b\t\k\a\3\i\e\t\g\w\f\c\s\7\h\a\x\x\7\1\b\s\p\m\3\g\5\g\l\6\i\u\u\c\u\4\t\n\6\h\f\2\i\2\2\f\s\o\9\7\s\u\n\z\w\k\e\9\h\o\q\o\7\4\0\w\1\l\b\t\o\l\g\g\3\u\p\7\1\l\z\f\o\2\0\9\t\w\4\3\e\0\l\i\7\u\y\h\q\r\c\3\s\m\e\y\6\k\g\2\2\r\o\i\b\6\y\u\s\t\n\f\l\5\v\s\t\j\q\a\i\v\u\b\f\e\n\5\7\7\m\g\a\d\m\y\n\t\j\i\e\7\r\q\d\g\z\u\d\c\i\x\t\1\z\h\s\z\5\p\z\u\9\f\m\n\5\x\9\s\i\m\h\g\v\g\y\m\k\i\x\q\n\l\e\v\3\9\b\f\l\6\f\n\n\n\t\3\6\c\t\9\h\u\x\x\a\a\m\c\8\2\9\b\x\t\n\l\0\x\n\9\n\6\n\7\f\7\k\j\i\j\8\g\l\x\l\c\u\j\m\m\k\v\e\t\f\i\9\4\m\r\a\1\s\n\j\z\d\f\v\5\7\o\0\d\a\3\7\y\j\s\n\y\4\1\5\e\s\i\q\g\4\n\l\b\t\c\z\o\i\s\n\8\3\e\o\y\y\n\q\z\v\l\s\o\j\d\g\f\l\3\t\b\y\h\0\e\7\u\t\i\k\v\x\c\k\z\7\5\c\r\g\5\4\g\y\4\y\m\a\p\u\0\n\c\4\v\2\x\f\r\v\f\e\t\n\w\y\m\5\8\h\o\f\x\b\h\4\t\7\f\u\i\r\9\6\p\g\q\4\0\f\y\n\8\m\w\u\6\s\y\9\x\o\3\c\8\w\w\d\v\u\2\s\b\t\b\k\3\l\r\y\s\0\w\u\q\x\q\t\p\3\4\g\x\n\y\l\f\r\e\c\x\t\6\s\l\u\n\s\2\d\1\3\0\6\u\r\y\d\1\0\z\9\7\n\r\2\9\b\x\s\7\0\6\1\k\t\k\1\n\h\i\b\2\3\z\1\i\w\a\l\0\1\y\5\p\1\2\4\1\g\o\d\4\8\m\w\4\i\j\u\m\g\p\0\9\c\s\e\7\y\f\2\7\v\w\t\c\m\8\2\u\5\w\t\4\m\3\c\4\h\g\4\s\9\5\t\u\r\v\w\3\f\r\8\x\j\t\s\4\o\g\l\v\q\0\6\3\8\3\u\5\k\f\0\n\l\z\z\6\g\q\n\p\2\3\9\3\k\9\w\i\b\a\o\g\y\p\s\x\j\5\k\n\a\w\1\y\u\q\z\3\f\7\e\g\5\a\x\m\6\y\o\1\i\x\p\r\4\x\i\q\n\g\o\t\0\1\g\i\k\r\l\8\i\3\1\7\5\g\1\a\k\3\b\3\s\m\s\m\e\7\4\k\l\3\j\o\1\f\y\1\8\i\a\4\m\w\q\0\x\x\8 ]] 00:08:33.680 07:54:39 -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:33.938 07:54:39 -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:08:33.938 07:54:39 -- dd/uring.sh@75 -- # gen_conf 00:08:33.938 07:54:39 -- dd/common.sh@31 -- # xtrace_disable 00:08:33.938 07:54:39 -- common/autotest_common.sh@10 -- # set +x 
00:08:33.938 [2024-07-13 07:54:39.750629] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:33.938 [2024-07-13 07:54:39.750714] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69879 ] 00:08:34.197 { 00:08:34.197 "subsystems": [ 00:08:34.197 { 00:08:34.197 "subsystem": "bdev", 00:08:34.197 "config": [ 00:08:34.197 { 00:08:34.197 "params": { 00:08:34.197 "block_size": 512, 00:08:34.197 "num_blocks": 1048576, 00:08:34.197 "name": "malloc0" 00:08:34.197 }, 00:08:34.197 "method": "bdev_malloc_create" 00:08:34.197 }, 00:08:34.197 { 00:08:34.197 "params": { 00:08:34.197 "filename": "/dev/zram1", 00:08:34.197 "name": "uring0" 00:08:34.197 }, 00:08:34.197 "method": "bdev_uring_create" 00:08:34.197 }, 00:08:34.197 { 00:08:34.197 "method": "bdev_wait_for_examine" 00:08:34.197 } 00:08:34.197 ] 00:08:34.197 } 00:08:34.197 ] 00:08:34.197 } 00:08:34.197 [2024-07-13 07:54:39.886957] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.197 [2024-07-13 07:54:39.929407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.271  Copying: 141/512 [MB] (141 MBps) Copying: 288/512 [MB] (146 MBps) Copying: 433/512 [MB] (144 MBps) Copying: 512/512 [MB] (average 144 MBps) 00:08:38.271 00:08:38.271 07:54:43 -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:08:38.271 07:54:43 -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:08:38.271 07:54:43 -- dd/uring.sh@87 -- # : 00:08:38.271 07:54:43 -- dd/uring.sh@87 -- # : 00:08:38.271 07:54:43 -- dd/uring.sh@87 -- # gen_conf 00:08:38.271 07:54:43 -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:08:38.271 07:54:43 -- dd/common.sh@31 -- # xtrace_disable 00:08:38.271 07:54:43 -- common/autotest_common.sh@10 -- # set +x 00:08:38.271 [2024-07-13 07:54:43.942597] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:38.271 [2024-07-13 07:54:43.942694] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69911 ] 00:08:38.271 { 00:08:38.271 "subsystems": [ 00:08:38.271 { 00:08:38.271 "subsystem": "bdev", 00:08:38.271 "config": [ 00:08:38.271 { 00:08:38.271 "params": { 00:08:38.271 "block_size": 512, 00:08:38.271 "num_blocks": 1048576, 00:08:38.271 "name": "malloc0" 00:08:38.271 }, 00:08:38.271 "method": "bdev_malloc_create" 00:08:38.271 }, 00:08:38.271 { 00:08:38.271 "params": { 00:08:38.271 "filename": "/dev/zram1", 00:08:38.271 "name": "uring0" 00:08:38.271 }, 00:08:38.271 "method": "bdev_uring_create" 00:08:38.271 }, 00:08:38.271 { 00:08:38.271 "params": { 00:08:38.271 "name": "uring0" 00:08:38.271 }, 00:08:38.271 "method": "bdev_uring_delete" 00:08:38.271 }, 00:08:38.271 { 00:08:38.271 "method": "bdev_wait_for_examine" 00:08:38.271 } 00:08:38.271 ] 00:08:38.271 } 00:08:38.271 ] 00:08:38.271 } 00:08:38.271 [2024-07-13 07:54:44.078416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.530 [2024-07-13 07:54:44.117686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.788  Copying: 0/0 [B] (average 0 Bps) 00:08:38.788 00:08:38.788 07:54:44 -- dd/uring.sh@94 -- # : 00:08:38.788 07:54:44 -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:38.788 07:54:44 -- dd/uring.sh@94 -- # gen_conf 00:08:38.788 07:54:44 -- common/autotest_common.sh@640 -- # local es=0 00:08:38.788 07:54:44 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:38.788 07:54:44 -- dd/common.sh@31 -- # xtrace_disable 00:08:38.788 07:54:44 -- common/autotest_common.sh@10 -- # set +x 00:08:38.788 07:54:44 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.788 07:54:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:38.788 07:54:44 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.788 07:54:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:38.788 07:54:44 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.788 07:54:44 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:38.788 07:54:44 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:38.788 07:54:44 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:38.788 07:54:44 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:08:39.046 [2024-07-13 07:54:44.605787] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:39.046 [2024-07-13 07:54:44.605890] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69933 ] 00:08:39.046 { 00:08:39.046 "subsystems": [ 00:08:39.046 { 00:08:39.046 "subsystem": "bdev", 00:08:39.046 "config": [ 00:08:39.046 { 00:08:39.046 "params": { 00:08:39.046 "block_size": 512, 00:08:39.046 "num_blocks": 1048576, 00:08:39.046 "name": "malloc0" 00:08:39.046 }, 00:08:39.046 "method": "bdev_malloc_create" 00:08:39.046 }, 00:08:39.046 { 00:08:39.046 "params": { 00:08:39.046 "filename": "/dev/zram1", 00:08:39.046 "name": "uring0" 00:08:39.046 }, 00:08:39.046 "method": "bdev_uring_create" 00:08:39.046 }, 00:08:39.046 { 00:08:39.046 "params": { 00:08:39.046 "name": "uring0" 00:08:39.046 }, 00:08:39.046 "method": "bdev_uring_delete" 00:08:39.046 }, 00:08:39.046 { 00:08:39.046 "method": "bdev_wait_for_examine" 00:08:39.046 } 00:08:39.046 ] 00:08:39.046 } 00:08:39.046 ] 00:08:39.046 } 00:08:39.046 [2024-07-13 07:54:44.742915] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:39.046 [2024-07-13 07:54:44.776421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.305 [2024-07-13 07:54:44.927083] bdev.c:8019:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:08:39.305 [2024-07-13 07:54:44.927134] spdk_dd.c: 932:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:08:39.305 [2024-07-13 07:54:44.927162] spdk_dd.c:1074:dd_run: *ERROR*: uring0: No such device 00:08:39.305 [2024-07-13 07:54:44.927188] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:39.305 [2024-07-13 07:54:45.095663] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:39.575 07:54:45 -- common/autotest_common.sh@643 -- # es=237 00:08:39.575 07:54:45 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:39.575 07:54:45 -- common/autotest_common.sh@652 -- # es=109 00:08:39.575 07:54:45 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:39.575 07:54:45 -- common/autotest_common.sh@660 -- # es=1 00:08:39.575 07:54:45 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:39.575 07:54:45 -- dd/uring.sh@99 -- # remove_zram_dev 1 00:08:39.575 07:54:45 -- dd/common.sh@172 -- # local id=1 00:08:39.575 07:54:45 -- dd/common.sh@174 -- # [[ -e /sys/block/zram1 ]] 00:08:39.575 07:54:45 -- dd/common.sh@176 -- # echo 1 00:08:39.575 07:54:45 -- dd/common.sh@177 -- # echo 1 00:08:39.575 07:54:45 -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:08:39.846 00:08:39.846 real 0m13.966s 00:08:39.846 user 0m8.089s 00:08:39.846 sys 0m5.160s 00:08:39.846 ************************************ 00:08:39.846 END TEST dd_uring_copy 00:08:39.846 ************************************ 00:08:39.846 07:54:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.846 07:54:45 -- common/autotest_common.sh@10 -- # set +x 00:08:39.846 ************************************ 00:08:39.846 END TEST spdk_dd_uring 00:08:39.846 ************************************ 00:08:39.846 00:08:39.846 real 0m14.102s 00:08:39.846 user 0m8.143s 00:08:39.846 sys 0m5.241s 00:08:39.846 07:54:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:39.846 07:54:45 -- common/autotest_common.sh@10 -- # set +x 00:08:39.846 07:54:45 -- dd/dd.sh@27 -- # run_test spdk_dd_sparse 
/home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:39.846 07:54:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:39.846 07:54:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.846 07:54:45 -- common/autotest_common.sh@10 -- # set +x 00:08:39.846 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:39.846 ************************************ 00:08:39.846 START TEST spdk_dd_sparse 00:08:39.846 ************************************ 00:08:39.846 07:54:45 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:08:39.846 * Looking for test storage... 00:08:39.846 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:39.846 07:54:45 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:39.846 07:54:45 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.846 07:54:45 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.846 07:54:45 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.846 07:54:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.846 07:54:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.846 07:54:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.846 07:54:45 -- paths/export.sh@5 -- # export PATH 00:08:39.846 07:54:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.846 07:54:45 -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 
00:08:39.846 07:54:45 -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:08:39.846 07:54:45 -- dd/sparse.sh@110 -- # file1=file_zero1 00:08:39.846 07:54:45 -- dd/sparse.sh@111 -- # file2=file_zero2 00:08:39.846 07:54:45 -- dd/sparse.sh@112 -- # file3=file_zero3 00:08:39.846 07:54:45 -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:08:39.846 07:54:45 -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:08:39.846 07:54:45 -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:08:39.846 07:54:45 -- dd/sparse.sh@118 -- # prepare 00:08:39.846 07:54:45 -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:08:39.846 07:54:45 -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:08:39.846 1+0 records in 00:08:39.846 1+0 records out 00:08:39.846 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00587979 s, 713 MB/s 00:08:39.846 07:54:45 -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:08:39.846 1+0 records in 00:08:39.846 1+0 records out 00:08:39.846 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00643898 s, 651 MB/s 00:08:39.846 07:54:45 -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:08:39.846 1+0 records in 00:08:39.846 1+0 records out 00:08:39.846 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.0063571 s, 660 MB/s 00:08:39.846 07:54:45 -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:08:39.846 07:54:45 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:39.846 07:54:45 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:39.846 07:54:45 -- common/autotest_common.sh@10 -- # set +x 00:08:39.846 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:39.846 ************************************ 00:08:39.846 START TEST dd_sparse_file_to_file 00:08:39.846 ************************************ 00:08:39.846 07:54:45 -- common/autotest_common.sh@1104 -- # file_to_file 00:08:39.846 07:54:45 -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:08:39.846 07:54:45 -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:08:39.846 07:54:45 -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:39.846 07:54:45 -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:08:39.846 07:54:45 -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:08:39.846 07:54:45 -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:08:39.846 07:54:45 -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:08:39.846 07:54:45 -- dd/sparse.sh@41 -- # gen_conf 00:08:39.847 07:54:45 -- dd/common.sh@31 -- # xtrace_disable 00:08:39.847 07:54:45 -- common/autotest_common.sh@10 -- # set +x 00:08:40.105 [2024-07-13 07:54:45.705219] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:40.105 [2024-07-13 07:54:45.705512] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70013 ] 00:08:40.105 { 00:08:40.105 "subsystems": [ 00:08:40.105 { 00:08:40.105 "subsystem": "bdev", 00:08:40.105 "config": [ 00:08:40.105 { 00:08:40.105 "params": { 00:08:40.105 "block_size": 4096, 00:08:40.105 "filename": "dd_sparse_aio_disk", 00:08:40.105 "name": "dd_aio" 00:08:40.105 }, 00:08:40.105 "method": "bdev_aio_create" 00:08:40.105 }, 00:08:40.105 { 00:08:40.105 "params": { 00:08:40.105 "lvs_name": "dd_lvstore", 00:08:40.105 "bdev_name": "dd_aio" 00:08:40.105 }, 00:08:40.105 "method": "bdev_lvol_create_lvstore" 00:08:40.105 }, 00:08:40.105 { 00:08:40.105 "method": "bdev_wait_for_examine" 00:08:40.105 } 00:08:40.105 ] 00:08:40.105 } 00:08:40.105 ] 00:08:40.105 } 00:08:40.105 [2024-07-13 07:54:45.843741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.105 [2024-07-13 07:54:45.878452] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.363  Copying: 12/36 [MB] (average 1714 MBps) 00:08:40.363 00:08:40.363 07:54:46 -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:08:40.363 07:54:46 -- dd/sparse.sh@47 -- # stat1_s=37748736 00:08:40.363 07:54:46 -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:08:40.363 07:54:46 -- dd/sparse.sh@48 -- # stat2_s=37748736 00:08:40.363 07:54:46 -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:40.363 07:54:46 -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:08:40.621 07:54:46 -- dd/sparse.sh@52 -- # stat1_b=24576 00:08:40.621 07:54:46 -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:08:40.621 07:54:46 -- dd/sparse.sh@53 -- # stat2_b=24576 00:08:40.621 07:54:46 -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:40.621 00:08:40.621 real 0m0.530s 00:08:40.621 user 0m0.296s 00:08:40.621 sys 0m0.143s 00:08:40.621 07:54:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.621 ************************************ 00:08:40.621 END TEST dd_sparse_file_to_file 00:08:40.621 ************************************ 00:08:40.621 07:54:46 -- common/autotest_common.sh@10 -- # set +x 00:08:40.621 07:54:46 -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:08:40.621 07:54:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:40.621 07:54:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:40.621 07:54:46 -- common/autotest_common.sh@10 -- # set +x 00:08:40.621 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:40.621 ************************************ 00:08:40.621 START TEST dd_sparse_file_to_bdev 00:08:40.621 ************************************ 00:08:40.621 07:54:46 -- common/autotest_common.sh@1104 -- # file_to_bdev 00:08:40.621 07:54:46 -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:40.621 07:54:46 -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:08:40.621 07:54:46 -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size']='37748736' ['thin_provision']='true') 00:08:40.621 07:54:46 -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:08:40.621 07:54:46 -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 
--ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:08:40.621 07:54:46 -- dd/sparse.sh@73 -- # gen_conf 00:08:40.621 07:54:46 -- dd/common.sh@31 -- # xtrace_disable 00:08:40.621 07:54:46 -- common/autotest_common.sh@10 -- # set +x 00:08:40.621 [2024-07-13 07:54:46.287687] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:40.621 [2024-07-13 07:54:46.287793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70052 ] 00:08:40.621 { 00:08:40.621 "subsystems": [ 00:08:40.621 { 00:08:40.621 "subsystem": "bdev", 00:08:40.621 "config": [ 00:08:40.621 { 00:08:40.621 "params": { 00:08:40.621 "block_size": 4096, 00:08:40.621 "filename": "dd_sparse_aio_disk", 00:08:40.621 "name": "dd_aio" 00:08:40.621 }, 00:08:40.621 "method": "bdev_aio_create" 00:08:40.621 }, 00:08:40.621 { 00:08:40.621 "params": { 00:08:40.621 "lvs_name": "dd_lvstore", 00:08:40.621 "lvol_name": "dd_lvol", 00:08:40.621 "size": 37748736, 00:08:40.621 "thin_provision": true 00:08:40.621 }, 00:08:40.621 "method": "bdev_lvol_create" 00:08:40.621 }, 00:08:40.621 { 00:08:40.622 "method": "bdev_wait_for_examine" 00:08:40.622 } 00:08:40.622 ] 00:08:40.622 } 00:08:40.622 ] 00:08:40.622 } 00:08:40.622 [2024-07-13 07:54:46.424321] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.879 [2024-07-13 07:54:46.459035] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.879 [2024-07-13 07:54:46.520130] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:08:40.879  Copying: 12/36 [MB] (average 545 MBps)[2024-07-13 07:54:46.558988] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:08:41.137 00:08:41.137 00:08:41.137 ************************************ 00:08:41.137 END TEST dd_sparse_file_to_bdev 00:08:41.137 ************************************ 00:08:41.137 00:08:41.137 real 0m0.505s 00:08:41.137 user 0m0.313s 00:08:41.137 sys 0m0.119s 00:08:41.137 07:54:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.137 07:54:46 -- common/autotest_common.sh@10 -- # set +x 00:08:41.137 07:54:46 -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:08:41.137 07:54:46 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:41.137 07:54:46 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:41.137 07:54:46 -- common/autotest_common.sh@10 -- # set +x 00:08:41.137 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:41.137 ************************************ 00:08:41.137 START TEST dd_sparse_bdev_to_file 00:08:41.137 ************************************ 00:08:41.137 07:54:46 -- common/autotest_common.sh@1104 -- # bdev_to_file 00:08:41.137 07:54:46 -- dd/sparse.sh@81 -- # local stat2_s stat2_b 00:08:41.137 07:54:46 -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:08:41.137 07:54:46 -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:08:41.137 07:54:46 -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:08:41.137 07:54:46 -- dd/sparse.sh@91 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:08:41.137 07:54:46 -- dd/sparse.sh@91 -- # gen_conf 00:08:41.137 07:54:46 -- dd/common.sh@31 -- # xtrace_disable 00:08:41.137 07:54:46 -- common/autotest_common.sh@10 -- # set +x 00:08:41.137 [2024-07-13 07:54:46.845567] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:41.137 [2024-07-13 07:54:46.845653] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70079 ] 00:08:41.137 { 00:08:41.137 "subsystems": [ 00:08:41.137 { 00:08:41.137 "subsystem": "bdev", 00:08:41.137 "config": [ 00:08:41.137 { 00:08:41.137 "params": { 00:08:41.137 "block_size": 4096, 00:08:41.137 "filename": "dd_sparse_aio_disk", 00:08:41.137 "name": "dd_aio" 00:08:41.137 }, 00:08:41.137 "method": "bdev_aio_create" 00:08:41.137 }, 00:08:41.137 { 00:08:41.137 "method": "bdev_wait_for_examine" 00:08:41.137 } 00:08:41.137 ] 00:08:41.137 } 00:08:41.137 ] 00:08:41.137 } 00:08:41.396 [2024-07-13 07:54:46.983414] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.396 [2024-07-13 07:54:47.017040] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.655  Copying: 12/36 [MB] (average 1333 MBps) 00:08:41.655 00:08:41.655 07:54:47 -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:08:41.655 07:54:47 -- dd/sparse.sh@97 -- # stat2_s=37748736 00:08:41.655 07:54:47 -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:08:41.655 07:54:47 -- dd/sparse.sh@98 -- # stat3_s=37748736 00:08:41.655 07:54:47 -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:08:41.655 07:54:47 -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:08:41.655 07:54:47 -- dd/sparse.sh@102 -- # stat2_b=24576 00:08:41.655 07:54:47 -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:08:41.655 07:54:47 -- dd/sparse.sh@103 -- # stat3_b=24576 00:08:41.655 07:54:47 -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:08:41.655 00:08:41.655 real 0m0.491s 00:08:41.655 user 0m0.287s 00:08:41.655 sys 0m0.122s 00:08:41.655 07:54:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.655 07:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.655 ************************************ 00:08:41.655 END TEST dd_sparse_bdev_to_file 00:08:41.655 ************************************ 00:08:41.655 07:54:47 -- dd/sparse.sh@1 -- # cleanup 00:08:41.655 07:54:47 -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:08:41.655 07:54:47 -- dd/sparse.sh@12 -- # rm file_zero1 00:08:41.655 07:54:47 -- dd/sparse.sh@13 -- # rm file_zero2 00:08:41.655 07:54:47 -- dd/sparse.sh@14 -- # rm file_zero3 00:08:41.655 00:08:41.655 real 0m1.826s 00:08:41.655 user 0m0.995s 00:08:41.655 sys 0m0.567s 00:08:41.655 07:54:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.655 07:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.655 ************************************ 00:08:41.655 END TEST spdk_dd_sparse 00:08:41.655 ************************************ 00:08:41.655 07:54:47 -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:41.655 07:54:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:41.655 07:54:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:41.655 07:54:47 -- common/autotest_common.sh@10 
-- # set +x 00:08:41.655 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:41.655 ************************************ 00:08:41.655 START TEST spdk_dd_negative 00:08:41.655 ************************************ 00:08:41.655 07:54:47 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:08:41.914 * Looking for test storage... 00:08:41.914 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:41.914 07:54:47 -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:41.914 07:54:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:41.914 07:54:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:41.914 07:54:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:41.914 07:54:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.914 07:54:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.914 07:54:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.914 07:54:47 -- paths/export.sh@5 -- # export PATH 00:08:41.914 07:54:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:41.914 07:54:47 -- dd/negative_dd.sh@101 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:41.914 07:54:47 -- dd/negative_dd.sh@102 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:41.914 07:54:47 -- dd/negative_dd.sh@104 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
00:08:41.914 07:54:47 -- dd/negative_dd.sh@105 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:41.914 07:54:47 -- dd/negative_dd.sh@107 -- # run_test dd_invalid_arguments invalid_arguments 00:08:41.914 07:54:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:41.914 07:54:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:41.914 07:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.914 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:41.914 ************************************ 00:08:41.914 START TEST dd_invalid_arguments 00:08:41.914 ************************************ 00:08:41.914 07:54:47 -- common/autotest_common.sh@1104 -- # invalid_arguments 00:08:41.914 07:54:47 -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:41.914 07:54:47 -- common/autotest_common.sh@640 -- # local es=0 00:08:41.914 07:54:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:41.914 07:54:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.914 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:41.914 07:54:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.914 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:41.914 07:54:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.914 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:41.914 07:54:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.914 07:54:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.914 07:54:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:08:41.914 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:08:41.914 options: 00:08:41.914 -c, --config JSON config file (default none) 00:08:41.914 --json JSON config file (default none) 00:08:41.914 --json-ignore-init-errors 00:08:41.914 don't exit on invalid config entry 00:08:41.914 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:08:41.914 -g, --single-file-segments 00:08:41.914 force creating just one hugetlbfs file 00:08:41.914 -h, --help show this usage 00:08:41.914 -i, --shm-id shared memory ID (optional) 00:08:41.914 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced (like [0,1,10]) for DPDK 00:08:41.914 --lcores lcore to CPU mapping list. The list is in the format: 00:08:41.915 [<,lcores[@CPUs]>...] 00:08:41.915 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:08:41.915 Within the group, '-' is used for range separator, 00:08:41.915 ',' is used for single number separator. 00:08:41.915 '( )' can be omitted for single element group, 00:08:41.915 '@' can be omitted if cpus and lcores have the same value 00:08:41.915 -n, --mem-channels channel number of memory channels used for DPDK 00:08:41.915 -p, --main-core main (primary) core for DPDK 00:08:41.915 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:08:41.915 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:08:41.915 --disable-cpumask-locks Disable CPU core lock files. 
00:08:41.915 --silence-noticelog disable notice level logging to stderr 00:08:41.915 --msg-mempool-size global message memory pool size in count (default: 262143) 00:08:41.915 -u, --no-pci disable PCI access 00:08:41.915 --wait-for-rpc wait for RPCs to initialize subsystems 00:08:41.915 --max-delay maximum reactor delay (in microseconds) 00:08:41.915 -B, --pci-blocked pci addr to block (can be used more than once) 00:08:41.915 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:08:41.915 -R, --huge-unlink unlink huge files after initialization 00:08:41.915 -v, --version print SPDK version 00:08:41.915 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:08:41.915 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:08:41.915 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:08:41.915 --num-trace-entries number of trace entries for each core, must be power of 2, setting 0 to disable trace (default 32768) 00:08:41.915 Tracepoints vary in size and can use more than one trace entry. 00:08:41.915 --rpcs-allowed comma-separated list of permitted RPCS 00:08:41.915 --env-context Opaque context for use of the env implementation 00:08:41.915 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:08:41.915 --no-huge run without using hugepages 00:08:41.915 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, blobfs_rw, ftl_core, ftl_init, gpt_parse, idxd, ioat, iscsi_init, json_util, log, log_rpc, lvol, lvol_rpc, notify_rpc, nvme, nvme_cuse, nvme_vfio, opal, reactor, rpc, rpc_client, sock, sock_posix, thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, virtio, virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:08:41.915 -e, --tpoint-group [:] 00:08:41.915 group_name - tracepoint group name for spdk trace buffers (bdev, ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, all) 00:08:41.915 tpoint_mask - tracepoint mask for enabling individual tpoints inside a tracepoint group. First tpoint inside a group can be enabled by setting tpoint_mask to 1 (e.g. bdev:0x1). 00:08:41.915 Groups and masks /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:08:41.915 [2024-07-13 07:54:47.554610] spdk_dd.c:1460:main: *ERROR*: Invalid arguments 00:08:41.915 can be combined (e.g. thread,bdev:0x1). 00:08:41.915 All available tpoints can be found in /include/spdk_internal/trace_defs.h 00:08:41.915 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all pollers in the app support interrupt mode) 00:08:41.915 [--------- DD Options ---------] 00:08:41.915 --if Input file. Must specify either --if or --ib. 00:08:41.915 --ib Input bdev. Must specifier either --if or --ib 00:08:41.915 --of Output file. Must specify either --of or --ob. 00:08:41.915 --ob Output bdev. Must specify either --of or --ob. 00:08:41.915 --iflag Input file flags. 00:08:41.915 --oflag Output file flags. 00:08:41.915 --bs I/O unit size (default: 4096) 00:08:41.915 --qd Queue depth (default: 2) 00:08:41.915 --count I/O unit count. The number of I/O units to copy. 
(default: all) 00:08:41.915 --skip Skip this many I/O units at start of input. (default: 0) 00:08:41.915 --seek Skip this many I/O units at start of output. (default: 0) 00:08:41.915 --aio Force usage of AIO. (by default io_uring is used if available) 00:08:41.915 --sparse Enable hole skipping in input target 00:08:41.915 Available iflag and oflag values: 00:08:41.915 append - append mode 00:08:41.915 direct - use direct I/O for data 00:08:41.915 directory - fail unless a directory 00:08:41.915 dsync - use synchronized I/O for data 00:08:41.915 noatime - do not update access time 00:08:41.915 noctty - do not assign controlling terminal from file 00:08:41.915 nofollow - do not follow symlinks 00:08:41.915 nonblock - use non-blocking I/O 00:08:41.915 sync - use synchronized I/O for data and metadata 00:08:41.915 07:54:47 -- common/autotest_common.sh@643 -- # es=2 00:08:41.915 07:54:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:41.915 07:54:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:41.915 07:54:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:41.915 00:08:41.915 real 0m0.068s 00:08:41.915 user 0m0.044s 00:08:41.915 sys 0m0.023s 00:08:41.915 07:54:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.915 ************************************ 00:08:41.915 END TEST dd_invalid_arguments 00:08:41.915 ************************************ 00:08:41.915 07:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.915 07:54:47 -- dd/negative_dd.sh@108 -- # run_test dd_double_input double_input 00:08:41.915 07:54:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:41.915 07:54:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:41.915 07:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.915 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:41.915 ************************************ 00:08:41.915 START TEST dd_double_input 00:08:41.915 ************************************ 00:08:41.915 07:54:47 -- common/autotest_common.sh@1104 -- # double_input 00:08:41.915 07:54:47 -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:41.915 07:54:47 -- common/autotest_common.sh@640 -- # local es=0 00:08:41.915 07:54:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:08:41.915 07:54:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.915 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:41.915 07:54:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.915 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:41.915 07:54:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.915 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:41.915 07:54:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:41.915 07:54:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:41.915 07:54:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 
00:08:41.915 [2024-07-13 07:54:47.671078] spdk_dd.c:1467:main: *ERROR*: You may specify either --if or --ib, but not both. 00:08:41.915 07:54:47 -- common/autotest_common.sh@643 -- # es=22 00:08:41.915 07:54:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:41.915 07:54:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:41.915 07:54:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:41.915 00:08:41.915 real 0m0.065s 00:08:41.915 user 0m0.039s 00:08:41.915 sys 0m0.025s 00:08:41.915 07:54:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:41.915 ************************************ 00:08:41.915 END TEST dd_double_input 00:08:41.915 ************************************ 00:08:41.915 07:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:41.915 07:54:47 -- dd/negative_dd.sh@109 -- # run_test dd_double_output double_output 00:08:41.915 07:54:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:41.915 07:54:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:41.915 07:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:42.174 ************************************ 00:08:42.174 START TEST dd_double_output 00:08:42.174 ************************************ 00:08:42.174 07:54:47 -- common/autotest_common.sh@1104 -- # double_output 00:08:42.174 07:54:47 -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:42.174 07:54:47 -- common/autotest_common.sh@640 -- # local es=0 00:08:42.174 07:54:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:42.174 07:54:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.174 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.174 07:54:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.174 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.174 07:54:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.174 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.174 07:54:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.174 07:54:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.174 07:54:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:08:42.174 [2024-07-13 07:54:47.784005] spdk_dd.c:1473:main: *ERROR*: You may specify either --of or --ob, but not both. 
00:08:42.174 07:54:47 -- common/autotest_common.sh@643 -- # es=22 00:08:42.174 07:54:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:42.174 07:54:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:42.174 07:54:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:42.174 00:08:42.174 real 0m0.065s 00:08:42.174 user 0m0.038s 00:08:42.174 sys 0m0.026s 00:08:42.174 07:54:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.174 ************************************ 00:08:42.174 07:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.174 END TEST dd_double_output 00:08:42.174 ************************************ 00:08:42.174 07:54:47 -- dd/negative_dd.sh@110 -- # run_test dd_no_input no_input 00:08:42.174 07:54:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:42.174 07:54:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.174 07:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.174 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:42.174 ************************************ 00:08:42.174 START TEST dd_no_input 00:08:42.174 ************************************ 00:08:42.174 07:54:47 -- common/autotest_common.sh@1104 -- # no_input 00:08:42.174 07:54:47 -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:42.174 07:54:47 -- common/autotest_common.sh@640 -- # local es=0 00:08:42.174 07:54:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:42.174 07:54:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.174 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.174 07:54:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.174 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.174 07:54:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.174 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.174 07:54:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.174 07:54:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.174 07:54:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:08:42.174 [2024-07-13 07:54:47.900992] spdk_dd.c:1479:main: *ERROR*: You must specify either --if or --ib 00:08:42.174 07:54:47 -- common/autotest_common.sh@643 -- # es=22 00:08:42.174 07:54:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:42.174 07:54:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:42.174 07:54:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:42.174 00:08:42.174 real 0m0.064s 00:08:42.174 user 0m0.041s 00:08:42.174 sys 0m0.023s 00:08:42.174 07:54:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.174 ************************************ 00:08:42.174 END TEST dd_no_input 00:08:42.174 ************************************ 00:08:42.174 07:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.174 07:54:47 -- dd/negative_dd.sh@111 -- # run_test dd_no_output no_output 00:08:42.174 07:54:47 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:42.174 07:54:47 -- common/autotest_common.sh@1083 -- # xtrace_disable 
00:08:42.174 07:54:47 -- common/autotest_common.sh@10 -- # set +x 00:08:42.175 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:42.175 ************************************ 00:08:42.175 START TEST dd_no_output 00:08:42.175 ************************************ 00:08:42.175 07:54:47 -- common/autotest_common.sh@1104 -- # no_output 00:08:42.175 07:54:47 -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:42.175 07:54:47 -- common/autotest_common.sh@640 -- # local es=0 00:08:42.175 07:54:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:42.175 07:54:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.175 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.175 07:54:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.175 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.175 07:54:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.175 07:54:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.175 07:54:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.175 07:54:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.175 07:54:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:42.433 [2024-07-13 07:54:48.018985] spdk_dd.c:1485:main: *ERROR*: You must specify either --of or --ob 00:08:42.434 07:54:48 -- common/autotest_common.sh@643 -- # es=22 00:08:42.434 07:54:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:42.434 07:54:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:42.434 07:54:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:42.434 00:08:42.434 real 0m0.067s 00:08:42.434 user 0m0.043s 00:08:42.434 sys 0m0.023s 00:08:42.434 07:54:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.434 07:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:42.434 ************************************ 00:08:42.434 END TEST dd_no_output 00:08:42.434 ************************************ 00:08:42.434 07:54:48 -- dd/negative_dd.sh@112 -- # run_test dd_wrong_blocksize wrong_blocksize 00:08:42.434 07:54:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:42.434 07:54:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.434 07:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:42.434 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:42.434 ************************************ 00:08:42.434 START TEST dd_wrong_blocksize 00:08:42.434 ************************************ 00:08:42.434 07:54:48 -- common/autotest_common.sh@1104 -- # wrong_blocksize 00:08:42.434 07:54:48 -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:42.434 07:54:48 -- common/autotest_common.sh@640 -- # local es=0 00:08:42.434 07:54:48 -- common/autotest_common.sh@642 -- # 
valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:42.434 07:54:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.434 07:54:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.434 07:54:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.434 07:54:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.434 07:54:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.434 07:54:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.434 07:54:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.434 07:54:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.434 07:54:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:08:42.434 [2024-07-13 07:54:48.135567] spdk_dd.c:1491:main: *ERROR*: Invalid --bs value 00:08:42.434 07:54:48 -- common/autotest_common.sh@643 -- # es=22 00:08:42.434 07:54:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:42.434 07:54:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:42.434 07:54:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:42.434 00:08:42.434 real 0m0.065s 00:08:42.434 user 0m0.039s 00:08:42.434 sys 0m0.025s 00:08:42.434 07:54:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.434 07:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:42.434 ************************************ 00:08:42.434 END TEST dd_wrong_blocksize 00:08:42.434 ************************************ 00:08:42.434 07:54:48 -- dd/negative_dd.sh@113 -- # run_test dd_smaller_blocksize smaller_blocksize 00:08:42.434 07:54:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:42.434 07:54:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.434 07:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:42.434 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:42.434 ************************************ 00:08:42.434 START TEST dd_smaller_blocksize 00:08:42.434 ************************************ 00:08:42.434 07:54:48 -- common/autotest_common.sh@1104 -- # smaller_blocksize 00:08:42.434 07:54:48 -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:42.434 07:54:48 -- common/autotest_common.sh@640 -- # local es=0 00:08:42.434 07:54:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:42.434 07:54:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.434 07:54:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.434 07:54:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.434 07:54:48 -- common/autotest_common.sh@632 
-- # case "$(type -t "$arg")" in 00:08:42.434 07:54:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.434 07:54:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.434 07:54:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.434 07:54:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.434 07:54:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:08:42.693 [2024-07-13 07:54:48.251702] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:42.693 [2024-07-13 07:54:48.251815] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70283 ] 00:08:42.693 [2024-07-13 07:54:48.391171] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.693 [2024-07-13 07:54:48.431735] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.693 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:08:42.693 [2024-07-13 07:54:48.483302] spdk_dd.c:1168:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:08:42.693 [2024-07-13 07:54:48.483334] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:42.951 [2024-07-13 07:54:48.550872] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:42.951 07:54:48 -- common/autotest_common.sh@643 -- # es=244 00:08:42.951 07:54:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:42.951 07:54:48 -- common/autotest_common.sh@652 -- # es=116 00:08:42.951 07:54:48 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:42.951 07:54:48 -- common/autotest_common.sh@660 -- # es=1 00:08:42.951 07:54:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:42.951 00:08:42.951 real 0m0.422s 00:08:42.951 user 0m0.208s 00:08:42.951 sys 0m0.110s 00:08:42.951 ************************************ 00:08:42.951 END TEST dd_smaller_blocksize 00:08:42.951 ************************************ 00:08:42.951 07:54:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.951 07:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:42.951 07:54:48 -- dd/negative_dd.sh@114 -- # run_test dd_invalid_count invalid_count 00:08:42.951 07:54:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:42.951 07:54:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:42.951 07:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:42.951 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:42.951 ************************************ 00:08:42.951 START TEST dd_invalid_count 00:08:42.951 ************************************ 00:08:42.951 07:54:48 -- common/autotest_common.sh@1104 -- # invalid_count 00:08:42.952 07:54:48 -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:42.952 07:54:48 -- common/autotest_common.sh@640 -- # local es=0 00:08:42.952 07:54:48 -- common/autotest_common.sh@642 -- # valid_exec_arg 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:42.952 07:54:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.952 07:54:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.952 07:54:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.952 07:54:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.952 07:54:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.952 07:54:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:42.952 07:54:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:42.952 07:54:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:42.952 07:54:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:08:42.952 [2024-07-13 07:54:48.726919] spdk_dd.c:1497:main: *ERROR*: Invalid --count value 00:08:42.952 07:54:48 -- common/autotest_common.sh@643 -- # es=22 00:08:42.952 07:54:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:42.952 07:54:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:42.952 07:54:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:42.952 00:08:42.952 real 0m0.065s 00:08:42.952 user 0m0.038s 00:08:42.952 sys 0m0.027s 00:08:42.952 07:54:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:42.952 07:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:42.952 ************************************ 00:08:42.952 END TEST dd_invalid_count 00:08:42.952 ************************************ 00:08:43.210 07:54:48 -- dd/negative_dd.sh@115 -- # run_test dd_invalid_oflag invalid_oflag 00:08:43.210 07:54:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:43.210 07:54:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:43.210 07:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:43.210 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:43.210 ************************************ 00:08:43.210 START TEST dd_invalid_oflag 00:08:43.210 ************************************ 00:08:43.210 07:54:48 -- common/autotest_common.sh@1104 -- # invalid_oflag 00:08:43.210 07:54:48 -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:43.210 07:54:48 -- common/autotest_common.sh@640 -- # local es=0 00:08:43.210 07:54:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:43.210 07:54:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.210 07:54:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:43.210 07:54:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.210 07:54:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:43.210 07:54:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.210 07:54:48 -- common/autotest_common.sh@632 -- # case "$(type 
-t "$arg")" in 00:08:43.210 07:54:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.211 07:54:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:43.211 07:54:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:08:43.211 [2024-07-13 07:54:48.841358] spdk_dd.c:1503:main: *ERROR*: --oflags may be used only with --of 00:08:43.211 07:54:48 -- common/autotest_common.sh@643 -- # es=22 00:08:43.211 07:54:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:43.211 07:54:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:43.211 07:54:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:43.211 00:08:43.211 real 0m0.064s 00:08:43.211 user 0m0.043s 00:08:43.211 sys 0m0.020s 00:08:43.211 07:54:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.211 07:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:43.211 ************************************ 00:08:43.211 END TEST dd_invalid_oflag 00:08:43.211 ************************************ 00:08:43.211 07:54:48 -- dd/negative_dd.sh@116 -- # run_test dd_invalid_iflag invalid_iflag 00:08:43.211 07:54:48 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:43.211 07:54:48 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:43.211 07:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:43.211 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:43.211 ************************************ 00:08:43.211 START TEST dd_invalid_iflag 00:08:43.211 ************************************ 00:08:43.211 07:54:48 -- common/autotest_common.sh@1104 -- # invalid_iflag 00:08:43.211 07:54:48 -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:43.211 07:54:48 -- common/autotest_common.sh@640 -- # local es=0 00:08:43.211 07:54:48 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:43.211 07:54:48 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.211 07:54:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:43.211 07:54:48 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.211 07:54:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:43.211 07:54:48 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.211 07:54:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:43.211 07:54:48 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.211 07:54:48 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:43.211 07:54:48 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:08:43.211 [2024-07-13 07:54:48.958030] spdk_dd.c:1509:main: *ERROR*: --iflags may be used only with --if 00:08:43.211 07:54:48 -- common/autotest_common.sh@643 -- # es=22 00:08:43.211 07:54:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:43.211 07:54:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:08:43.211 07:54:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:43.211 00:08:43.211 real 0m0.065s 00:08:43.211 user 0m0.041s 
00:08:43.211 sys 0m0.023s 00:08:43.211 07:54:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.211 07:54:48 -- common/autotest_common.sh@10 -- # set +x 00:08:43.211 ************************************ 00:08:43.211 END TEST dd_invalid_iflag 00:08:43.211 ************************************ 00:08:43.211 07:54:49 -- dd/negative_dd.sh@117 -- # run_test dd_unknown_flag unknown_flag 00:08:43.211 07:54:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:43.211 07:54:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:43.211 07:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:43.211 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:43.470 ************************************ 00:08:43.470 START TEST dd_unknown_flag 00:08:43.470 ************************************ 00:08:43.470 07:54:49 -- common/autotest_common.sh@1104 -- # unknown_flag 00:08:43.470 07:54:49 -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:43.470 07:54:49 -- common/autotest_common.sh@640 -- # local es=0 00:08:43.470 07:54:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:43.470 07:54:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.470 07:54:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:43.470 07:54:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.470 07:54:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:43.470 07:54:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.470 07:54:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:43.470 07:54:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.470 07:54:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:43.470 07:54:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:08:43.470 [2024-07-13 07:54:49.076100] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:08:43.470 [2024-07-13 07:54:49.076182] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70369 ] 00:08:43.470 [2024-07-13 07:54:49.212820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.470 [2024-07-13 07:54:49.253075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.730 [2024-07-13 07:54:49.304832] spdk_dd.c: 985:parse_flags: *ERROR*: Unknown file flag: -1 00:08:43.730 [2024-07-13 07:54:49.304899] spdk_dd.c: 893:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:43.730 [2024-07-13 07:54:49.304914] spdk_dd.c:1116:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1: Not a directory 00:08:43.730 [2024-07-13 07:54:49.304940] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.730 [2024-07-13 07:54:49.371591] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:43.730 07:54:49 -- common/autotest_common.sh@643 -- # es=236 00:08:43.730 07:54:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:43.730 07:54:49 -- common/autotest_common.sh@652 -- # es=108 00:08:43.730 07:54:49 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:43.730 07:54:49 -- common/autotest_common.sh@660 -- # es=1 00:08:43.730 07:54:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:43.730 00:08:43.730 real 0m0.420s 00:08:43.730 user 0m0.217s 00:08:43.730 sys 0m0.098s 00:08:43.730 07:54:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.730 ************************************ 00:08:43.730 END TEST dd_unknown_flag 00:08:43.730 ************************************ 00:08:43.730 07:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:43.730 07:54:49 -- dd/negative_dd.sh@118 -- # run_test dd_invalid_json invalid_json 00:08:43.730 07:54:49 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:43.730 07:54:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:43.730 07:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:43.730 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:43.730 ************************************ 00:08:43.730 START TEST dd_invalid_json 00:08:43.730 ************************************ 00:08:43.730 07:54:49 -- common/autotest_common.sh@1104 -- # invalid_json 00:08:43.730 07:54:49 -- dd/negative_dd.sh@95 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:43.730 07:54:49 -- common/autotest_common.sh@640 -- # local es=0 00:08:43.730 07:54:49 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:43.730 07:54:49 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.730 07:54:49 -- dd/negative_dd.sh@95 -- # : 00:08:43.730 07:54:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:43.730 07:54:49 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.730 07:54:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 
00:08:43.730 07:54:49 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.730 07:54:49 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:08:43.730 07:54:49 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:43.730 07:54:49 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:43.730 07:54:49 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:08:43.990 [2024-07-13 07:54:49.545141] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:43.990 [2024-07-13 07:54:49.545228] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70397 ] 00:08:43.990 [2024-07-13 07:54:49.683668] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.990 [2024-07-13 07:54:49.723628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.990 [2024-07-13 07:54:49.723768] json_config.c: 529:app_json_config_read: *ERROR*: Parsing JSON configuration failed (-2) 00:08:43.990 [2024-07-13 07:54:49.723820] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:43.990 [2024-07-13 07:54:49.723877] spdk_dd.c:1516:main: *ERROR*: Error occurred while performing copy 00:08:43.990 07:54:49 -- common/autotest_common.sh@643 -- # es=234 00:08:43.990 07:54:49 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:08:43.990 07:54:49 -- common/autotest_common.sh@652 -- # es=106 00:08:43.990 07:54:49 -- common/autotest_common.sh@653 -- # case "$es" in 00:08:43.990 07:54:49 -- common/autotest_common.sh@660 -- # es=1 00:08:43.990 07:54:49 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:08:43.990 00:08:43.990 real 0m0.298s 00:08:43.990 user 0m0.144s 00:08:43.990 sys 0m0.053s 00:08:43.990 07:54:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:43.990 ************************************ 00:08:43.990 END TEST dd_invalid_json 00:08:43.990 ************************************ 00:08:43.990 07:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.249 ************************************ 00:08:44.249 END TEST spdk_dd_negative 00:08:44.249 ************************************ 00:08:44.249 00:08:44.249 real 0m2.432s 00:08:44.249 user 0m1.165s 00:08:44.249 sys 0m0.903s 00:08:44.249 07:54:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.249 07:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.249 ************************************ 00:08:44.249 END TEST spdk_dd 00:08:44.249 ************************************ 00:08:44.249 00:08:44.249 real 1m1.040s 00:08:44.249 user 0m36.978s 00:08:44.249 sys 0m14.843s 00:08:44.249 07:54:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:44.249 07:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.250 07:54:49 -- spdk/autotest.sh@217 -- # '[' 0 -eq 1 ']' 00:08:44.250 07:54:49 -- spdk/autotest.sh@264 -- # '[' 0 -eq 1 ']' 00:08:44.250 07:54:49 -- spdk/autotest.sh@268 -- # timing_exit lib 00:08:44.250 07:54:49 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:44.250 07:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.250 07:54:49 -- spdk/autotest.sh@270 -- # 
'[' 0 -eq 1 ']' 00:08:44.250 07:54:49 -- spdk/autotest.sh@278 -- # '[' 0 -eq 1 ']' 00:08:44.250 07:54:49 -- spdk/autotest.sh@287 -- # '[' 1 -eq 1 ']' 00:08:44.250 07:54:49 -- spdk/autotest.sh@288 -- # export NET_TYPE 00:08:44.250 07:54:49 -- spdk/autotest.sh@291 -- # '[' tcp = rdma ']' 00:08:44.250 07:54:49 -- spdk/autotest.sh@294 -- # '[' tcp = tcp ']' 00:08:44.250 07:54:49 -- spdk/autotest.sh@295 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:44.250 07:54:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:44.250 07:54:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.250 07:54:49 -- common/autotest_common.sh@10 -- # set +x 00:08:44.250 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:44.250 ************************************ 00:08:44.250 START TEST nvmf_tcp 00:08:44.250 ************************************ 00:08:44.250 07:54:49 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:08:44.250 * Looking for test storage... 00:08:44.250 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:08:44.250 07:54:50 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:44.250 07:54:50 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:44.250 07:54:50 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:44.250 07:54:50 -- nvmf/common.sh@7 -- # uname -s 00:08:44.250 07:54:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.250 07:54:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.250 07:54:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.250 07:54:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.250 07:54:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.250 07:54:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.250 07:54:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.250 07:54:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.250 07:54:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.250 07:54:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.250 07:54:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:08:44.250 07:54:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:08:44.250 07:54:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.250 07:54:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.250 07:54:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:44.250 07:54:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:44.250 07:54:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.250 07:54:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.250 07:54:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.250 07:54:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.250 07:54:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.250 07:54:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.250 07:54:50 -- paths/export.sh@5 -- # export PATH 00:08:44.250 07:54:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.250 07:54:50 -- nvmf/common.sh@46 -- # : 0 00:08:44.250 07:54:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:44.250 07:54:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:44.250 07:54:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:44.250 07:54:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.250 07:54:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.250 07:54:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:44.250 07:54:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:44.250 07:54:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:44.510 07:54:50 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:44.510 07:54:50 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:44.510 07:54:50 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:44.510 07:54:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:44.510 07:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:44.510 07:54:50 -- nvmf/nvmf.sh@22 -- # [[ 1 -eq 0 ]] 00:08:44.510 07:54:50 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:44.510 07:54:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:44.510 07:54:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:44.510 07:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:44.510 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:44.510 ************************************ 00:08:44.510 START TEST nvmf_host_management 00:08:44.510 ************************************ 00:08:44.510 07:54:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:44.510 * Looking for test storage... 
00:08:44.510 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:44.510 07:54:50 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:44.510 07:54:50 -- nvmf/common.sh@7 -- # uname -s 00:08:44.510 07:54:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:44.510 07:54:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:44.510 07:54:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:44.510 07:54:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:44.510 07:54:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:44.510 07:54:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:44.510 07:54:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:44.510 07:54:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:44.510 07:54:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:44.510 07:54:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:44.510 07:54:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:08:44.510 07:54:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:08:44.510 07:54:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:44.510 07:54:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:44.510 07:54:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:44.510 07:54:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:44.510 07:54:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:44.510 07:54:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:44.510 07:54:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:44.510 07:54:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.510 07:54:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.510 07:54:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.510 07:54:50 -- 
paths/export.sh@5 -- # export PATH 00:08:44.510 07:54:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:44.510 07:54:50 -- nvmf/common.sh@46 -- # : 0 00:08:44.510 07:54:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:44.510 07:54:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:44.510 07:54:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:44.510 07:54:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:44.510 07:54:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:44.510 07:54:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:44.510 07:54:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:44.510 07:54:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:44.510 07:54:50 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:44.510 07:54:50 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:44.510 07:54:50 -- target/host_management.sh@104 -- # nvmftestinit 00:08:44.510 07:54:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:44.510 07:54:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:44.510 07:54:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:44.510 07:54:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:44.510 07:54:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:44.510 07:54:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.510 07:54:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:44.510 07:54:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:44.510 07:54:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:44.510 07:54:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:44.510 07:54:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:44.510 07:54:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:44.510 07:54:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:44.510 07:54:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:44.510 07:54:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:44.510 07:54:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:44.510 07:54:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:44.510 07:54:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:44.511 07:54:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:44.511 07:54:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:44.511 07:54:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:44.511 07:54:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:44.511 07:54:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:44.511 07:54:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:44.511 07:54:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:44.511 07:54:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:44.511 07:54:50 -- nvmf/common.sh@153 -- # ip link set 
nvmf_init_br nomaster 00:08:44.511 Cannot find device "nvmf_init_br" 00:08:44.511 07:54:50 -- nvmf/common.sh@153 -- # true 00:08:44.511 07:54:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:44.511 Cannot find device "nvmf_tgt_br" 00:08:44.511 07:54:50 -- nvmf/common.sh@154 -- # true 00:08:44.511 07:54:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:44.511 Cannot find device "nvmf_tgt_br2" 00:08:44.511 07:54:50 -- nvmf/common.sh@155 -- # true 00:08:44.511 07:54:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:44.511 Cannot find device "nvmf_init_br" 00:08:44.511 07:54:50 -- nvmf/common.sh@156 -- # true 00:08:44.511 07:54:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:44.511 Cannot find device "nvmf_tgt_br" 00:08:44.511 07:54:50 -- nvmf/common.sh@157 -- # true 00:08:44.511 07:54:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:44.511 Cannot find device "nvmf_tgt_br2" 00:08:44.511 07:54:50 -- nvmf/common.sh@158 -- # true 00:08:44.511 07:54:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:44.511 Cannot find device "nvmf_br" 00:08:44.511 07:54:50 -- nvmf/common.sh@159 -- # true 00:08:44.511 07:54:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:44.511 Cannot find device "nvmf_init_if" 00:08:44.511 07:54:50 -- nvmf/common.sh@160 -- # true 00:08:44.511 07:54:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:44.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.511 07:54:50 -- nvmf/common.sh@161 -- # true 00:08:44.511 07:54:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:44.511 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:44.511 07:54:50 -- nvmf/common.sh@162 -- # true 00:08:44.511 07:54:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:44.511 07:54:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:44.511 07:54:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:44.511 07:54:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:44.511 07:54:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:44.770 07:54:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:44.770 07:54:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:44.770 07:54:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:44.770 07:54:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:44.770 07:54:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:44.770 07:54:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:44.770 07:54:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:44.770 07:54:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:44.770 07:54:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:44.770 07:54:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:44.770 07:54:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:44.770 07:54:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:44.770 07:54:50 -- nvmf/common.sh@192 
-- # ip link set nvmf_br up 00:08:44.770 07:54:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:44.770 07:54:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:44.770 07:54:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:44.770 07:54:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:44.770 07:54:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:44.770 07:54:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:44.770 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:44.770 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:08:44.770 00:08:44.770 --- 10.0.0.2 ping statistics --- 00:08:44.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.770 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:08:44.770 07:54:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:44.770 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:44.770 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:08:44.770 00:08:44.770 --- 10.0.0.3 ping statistics --- 00:08:44.770 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:44.770 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:08:44.770 07:54:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:45.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:45.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:08:45.029 00:08:45.029 --- 10.0.0.1 ping statistics --- 00:08:45.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:45.029 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:08:45.029 07:54:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:45.029 07:54:50 -- nvmf/common.sh@421 -- # return 0 00:08:45.029 07:54:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:45.029 07:54:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:45.029 07:54:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:45.029 07:54:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:45.029 07:54:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:45.029 07:54:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:45.029 07:54:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:45.029 07:54:50 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:08:45.029 07:54:50 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:08:45.030 07:54:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:45.030 07:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:45.030 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:45.030 ************************************ 00:08:45.030 START TEST nvmf_host_management 00:08:45.030 ************************************ 00:08:45.030 07:54:50 -- common/autotest_common.sh@1104 -- # nvmf_host_management 00:08:45.030 07:54:50 -- target/host_management.sh@69 -- # starttarget 00:08:45.030 07:54:50 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:45.030 07:54:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:45.030 07:54:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:45.030 07:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:45.030 07:54:50 -- nvmf/common.sh@469 -- # nvmfpid=70653 00:08:45.030 07:54:50 -- nvmf/common.sh@470 -- # waitforlisten 70653 
00:08:45.030 07:54:50 -- common/autotest_common.sh@819 -- # '[' -z 70653 ']' 00:08:45.030 07:54:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:45.030 07:54:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.030 07:54:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:45.030 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.030 07:54:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.030 07:54:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:45.030 07:54:50 -- common/autotest_common.sh@10 -- # set +x 00:08:45.030 [2024-07-13 07:54:50.682881] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:45.030 [2024-07-13 07:54:50.682993] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.030 [2024-07-13 07:54:50.825689] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:45.288 [2024-07-13 07:54:50.868705] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:45.288 [2024-07-13 07:54:50.869093] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:45.288 [2024-07-13 07:54:50.869204] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:45.288 [2024-07-13 07:54:50.869289] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:45.288 [2024-07-13 07:54:50.869909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:45.288 [2024-07-13 07:54:50.869992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:45.288 [2024-07-13 07:54:50.870113] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:08:45.288 [2024-07-13 07:54:50.870119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:45.856 07:54:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:45.856 07:54:51 -- common/autotest_common.sh@852 -- # return 0 00:08:45.856 07:54:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:45.856 07:54:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:45.856 07:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.114 07:54:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.114 07:54:51 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:46.114 07:54:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.114 07:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.114 [2024-07-13 07:54:51.678703] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.114 07:54:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.114 07:54:51 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:08:46.114 07:54:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:46.114 07:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.114 07:54:51 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:46.114 07:54:51 -- target/host_management.sh@23 -- # cat 00:08:46.114 07:54:51 -- target/host_management.sh@30 -- # rpc_cmd 00:08:46.114 07:54:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.114 07:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.114 Malloc0 00:08:46.114 [2024-07-13 07:54:51.747448] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.114 07:54:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:46.114 07:54:51 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:46.114 07:54:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:46.114 07:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:46.114 07:54:51 -- target/host_management.sh@73 -- # perfpid=70701 00:08:46.114 07:54:51 -- target/host_management.sh@74 -- # waitforlisten 70701 /var/tmp/bdevperf.sock 00:08:46.114 07:54:51 -- common/autotest_common.sh@819 -- # '[' -z 70701 ']' 00:08:46.114 07:54:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:46.114 07:54:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:46.114 07:54:51 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:46.114 07:54:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:08:46.114 07:54:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:46.114 07:54:51 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:46.114 07:54:51 -- common/autotest_common.sh@10 -- # set +x 00:08:46.114 07:54:51 -- nvmf/common.sh@520 -- # config=() 00:08:46.114 07:54:51 -- nvmf/common.sh@520 -- # local subsystem config 00:08:46.114 07:54:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:46.114 07:54:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:46.114 { 00:08:46.114 "params": { 00:08:46.114 "name": "Nvme$subsystem", 00:08:46.114 "trtype": "$TEST_TRANSPORT", 00:08:46.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.114 "adrfam": "ipv4", 00:08:46.114 "trsvcid": "$NVMF_PORT", 00:08:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.114 "hdgst": ${hdgst:-false}, 00:08:46.114 "ddgst": ${ddgst:-false} 00:08:46.114 }, 00:08:46.114 "method": "bdev_nvme_attach_controller" 00:08:46.114 } 00:08:46.114 EOF 00:08:46.114 )") 00:08:46.114 07:54:51 -- nvmf/common.sh@542 -- # cat 00:08:46.114 07:54:51 -- nvmf/common.sh@544 -- # jq . 00:08:46.114 07:54:51 -- nvmf/common.sh@545 -- # IFS=, 00:08:46.114 07:54:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:46.114 "params": { 00:08:46.114 "name": "Nvme0", 00:08:46.114 "trtype": "tcp", 00:08:46.114 "traddr": "10.0.0.2", 00:08:46.114 "adrfam": "ipv4", 00:08:46.114 "trsvcid": "4420", 00:08:46.114 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:46.114 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:46.114 "hdgst": false, 00:08:46.114 "ddgst": false 00:08:46.114 }, 00:08:46.114 "method": "bdev_nvme_attach_controller" 00:08:46.114 }' 00:08:46.114 [2024-07-13 07:54:51.844756] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:46.114 [2024-07-13 07:54:51.844861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70701 ] 00:08:46.372 [2024-07-13 07:54:51.985628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.372 [2024-07-13 07:54:52.025068] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.372 Running I/O for 10 seconds... 
00:08:46.937 07:54:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:46.937 07:54:52 -- common/autotest_common.sh@852 -- # return 0 00:08:46.937 07:54:52 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:46.937 07:54:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:46.937 07:54:52 -- common/autotest_common.sh@10 -- # set +x 00:08:47.199 07:54:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.199 07:54:52 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:47.199 07:54:52 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:47.199 07:54:52 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:47.199 07:54:52 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:47.199 07:54:52 -- target/host_management.sh@52 -- # local ret=1 00:08:47.199 07:54:52 -- target/host_management.sh@53 -- # local i 00:08:47.199 07:54:52 -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:47.199 07:54:52 -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:47.199 07:54:52 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:47.199 07:54:52 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:47.199 07:54:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.199 07:54:52 -- common/autotest_common.sh@10 -- # set +x 00:08:47.199 07:54:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.199 07:54:52 -- target/host_management.sh@55 -- # read_io_count=1666 00:08:47.199 07:54:52 -- target/host_management.sh@58 -- # '[' 1666 -ge 100 ']' 00:08:47.199 07:54:52 -- target/host_management.sh@59 -- # ret=0 00:08:47.199 07:54:52 -- target/host_management.sh@60 -- # break 00:08:47.199 07:54:52 -- target/host_management.sh@64 -- # return 0 00:08:47.199 07:54:52 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:47.199 07:54:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.199 07:54:52 -- common/autotest_common.sh@10 -- # set +x 00:08:47.199 [2024-07-13 07:54:52.821923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.821971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.821983] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.821993] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822002] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822010] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to 
be set 00:08:47.199 [2024-07-13 07:54:52.822036] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822045] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822058] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822072] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822086] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822100] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822109] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822117] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822126] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822134] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822160] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822185] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822250] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822258] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822265] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822272] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xafedb0 is same with the state(5) to be set 00:08:47.199 [2024-07-13 07:54:52.822355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:95488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:101376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.199 [2024-07-13 07:54:52.822700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.199 [2024-07-13 07:54:52.822711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.822720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.822730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:96128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.822739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.822750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.822759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.822770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.822825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.822839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.822849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.822861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.822871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.822882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.822892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.822904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.822914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.822926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.822935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.822947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.822957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.822968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.822978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.822990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.200 [2024-07-13 07:54:52.823702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.200 [2024-07-13 07:54:52.823711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.201 [2024-07-13 07:54:52.823722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.201 [2024-07-13 07:54:52.823731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.201 [2024-07-13 07:54:52.823742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.201 [2024-07-13 07:54:52.823752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.201 [2024-07-13 07:54:52.823763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.201 [2024-07-13 07:54:52.823779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:08:47.201 [2024-07-13 07:54:52.823824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.201 [2024-07-13 07:54:52.823834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.201 [2024-07-13 07:54:52.823848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.201 [2024-07-13 07:54:52.823859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.201 [2024-07-13 07:54:52.823871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:47.201 [2024-07-13 07:54:52.823881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.201 [2024-07-13 07:54:52.823892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x982090 is same with the state(5) to be set 00:08:47.201 [2024-07-13 07:54:52.823943] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x982090 was disconnected and freed. reset controller. 00:08:47.201 [2024-07-13 07:54:52.825075] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:47.201 task offset: 100608 on job bdev=Nvme0n1 fails 00:08:47.201 00:08:47.201 Latency(us) 00:08:47.201 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:47.201 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:47.201 Job: Nvme0n1 ended in about 0.66 seconds with error 00:08:47.201 Verification LBA range: start 0x0 length 0x400 00:08:47.201 Nvme0n1 : 0.66 2694.14 168.38 96.43 0.00 22530.17 5213.09 31933.91 00:08:47.201 =================================================================================================================== 00:08:47.201 Total : 2694.14 168.38 96.43 0.00 22530.17 5213.09 31933.91 00:08:47.201 [2024-07-13 07:54:52.827099] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:47.201 [2024-07-13 07:54:52.827128] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x945bc0 (9): Bad file descriptor 00:08:47.201 07:54:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.201 [2024-07-13 07:54:52.827940] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:08:47.201 07:54:52 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:47.201 [2024-07-13 07:54:52.828134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:08:47.201 [2024-07-13 07:54:52.828178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:47.201 [2024-07-13 07:54:52.828196] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:08:47.201 [2024-07-13 07:54:52.828207] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command 
completed with error: sct 1, sc 132 00:08:47.201 [2024-07-13 07:54:52.828216] nvme_tcp.c:2341:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:08:47.201 07:54:52 -- common/autotest_common.sh@551 -- # xtrace_disable 00:08:47.201 [2024-07-13 07:54:52.828225] nvme_tcp.c:2138:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x945bc0 00:08:47.201 [2024-07-13 07:54:52.828257] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x945bc0 (9): Bad file descriptor 00:08:47.201 [2024-07-13 07:54:52.828274] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:08:47.201 [2024-07-13 07:54:52.828284] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:08:47.201 [2024-07-13 07:54:52.828294] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:08:47.201 [2024-07-13 07:54:52.828310] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:08:47.201 07:54:52 -- common/autotest_common.sh@10 -- # set +x 00:08:47.201 07:54:52 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:08:47.201 07:54:52 -- target/host_management.sh@87 -- # sleep 1 00:08:48.138 07:54:53 -- target/host_management.sh@91 -- # kill -9 70701 00:08:48.138 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (70701) - No such process 00:08:48.138 07:54:53 -- target/host_management.sh@91 -- # true 00:08:48.138 07:54:53 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:48.138 07:54:53 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:48.138 07:54:53 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:48.138 07:54:53 -- nvmf/common.sh@520 -- # config=() 00:08:48.138 07:54:53 -- nvmf/common.sh@520 -- # local subsystem config 00:08:48.138 07:54:53 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:08:48.138 07:54:53 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:08:48.138 { 00:08:48.138 "params": { 00:08:48.138 "name": "Nvme$subsystem", 00:08:48.138 "trtype": "$TEST_TRANSPORT", 00:08:48.138 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:48.138 "adrfam": "ipv4", 00:08:48.138 "trsvcid": "$NVMF_PORT", 00:08:48.138 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:48.138 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:48.138 "hdgst": ${hdgst:-false}, 00:08:48.138 "ddgst": ${ddgst:-false} 00:08:48.138 }, 00:08:48.138 "method": "bdev_nvme_attach_controller" 00:08:48.138 } 00:08:48.138 EOF 00:08:48.138 )") 00:08:48.138 07:54:53 -- nvmf/common.sh@542 -- # cat 00:08:48.138 07:54:53 -- nvmf/common.sh@544 -- # jq . 
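As a point of reference, a minimal sketch of the here-document-plus-jq pattern that the gen_nvmf_target_json trace above is walking through; the helper name gen_controller_json is hypothetical, and the address, port and NQN values simply mirror the ones printed in this log:

# Hypothetical condensation of the traced pattern: emit one
# bdev_nvme_attach_controller fragment for a given subsystem id.
gen_controller_json() {
  local subsystem=${1:-0}
  cat <<EOF
{
  "params": {
    "name": "Nvme${subsystem}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${subsystem}",
    "hostnqn": "nqn.2016-06.io.spdk:host${subsystem}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

# jq . validates and pretty-prints the fragment; the test then hands the
# assembled config to bdevperf over --json /dev/fd/62, as in the command
# line shown above.
gen_controller_json 0 | jq .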
00:08:48.138 07:54:53 -- nvmf/common.sh@545 -- # IFS=, 00:08:48.138 07:54:53 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:08:48.138 "params": { 00:08:48.138 "name": "Nvme0", 00:08:48.138 "trtype": "tcp", 00:08:48.138 "traddr": "10.0.0.2", 00:08:48.138 "adrfam": "ipv4", 00:08:48.138 "trsvcid": "4420", 00:08:48.138 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:48.138 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:48.138 "hdgst": false, 00:08:48.138 "ddgst": false 00:08:48.138 }, 00:08:48.138 "method": "bdev_nvme_attach_controller" 00:08:48.138 }' 00:08:48.138 [2024-07-13 07:54:53.891416] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:48.138 [2024-07-13 07:54:53.891502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70727 ] 00:08:48.397 [2024-07-13 07:54:54.029513] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:48.397 [2024-07-13 07:54:54.068082] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.397 Running I/O for 1 seconds... 00:08:49.774 00:08:49.774 Latency(us) 00:08:49.774 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:49.774 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:49.774 Verification LBA range: start 0x0 length 0x400 00:08:49.774 Nvme0n1 : 1.01 2940.53 183.78 0.00 0.00 21408.71 1154.33 31457.28 00:08:49.774 =================================================================================================================== 00:08:49.774 Total : 2940.53 183.78 0.00 0.00 21408.71 1154.33 31457.28 00:08:49.774 07:54:55 -- target/host_management.sh@101 -- # stoptarget 00:08:49.774 07:54:55 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:49.774 07:54:55 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:08:49.774 07:54:55 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:08:49.774 07:54:55 -- target/host_management.sh@40 -- # nvmftestfini 00:08:49.774 07:54:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:49.774 07:54:55 -- nvmf/common.sh@116 -- # sync 00:08:49.774 07:54:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:08:49.774 07:54:55 -- nvmf/common.sh@119 -- # set +e 00:08:49.774 07:54:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:49.774 07:54:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:08:49.774 rmmod nvme_tcp 00:08:49.774 rmmod nvme_fabrics 00:08:49.774 rmmod nvme_keyring 00:08:49.774 07:54:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:49.774 07:54:55 -- nvmf/common.sh@123 -- # set -e 00:08:49.774 07:54:55 -- nvmf/common.sh@124 -- # return 0 00:08:49.774 07:54:55 -- nvmf/common.sh@477 -- # '[' -n 70653 ']' 00:08:49.774 07:54:55 -- nvmf/common.sh@478 -- # killprocess 70653 00:08:49.774 07:54:55 -- common/autotest_common.sh@926 -- # '[' -z 70653 ']' 00:08:49.774 07:54:55 -- common/autotest_common.sh@930 -- # kill -0 70653 00:08:49.774 07:54:55 -- common/autotest_common.sh@931 -- # uname 00:08:49.774 07:54:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:08:49.774 07:54:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70653 00:08:49.774 07:54:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:08:49.774 killing process with pid 
70653 00:08:49.774 07:54:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:08:49.774 07:54:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70653' 00:08:49.774 07:54:55 -- common/autotest_common.sh@945 -- # kill 70653 00:08:49.774 07:54:55 -- common/autotest_common.sh@950 -- # wait 70653 00:08:50.033 [2024-07-13 07:54:55.632640] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:50.033 07:54:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:50.033 07:54:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:08:50.033 07:54:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:08:50.033 07:54:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:50.033 07:54:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:08:50.033 07:54:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.033 07:54:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.033 07:54:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.033 07:54:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:08:50.033 00:08:50.033 real 0m5.075s 00:08:50.033 user 0m21.463s 00:08:50.033 sys 0m1.081s 00:08:50.033 ************************************ 00:08:50.033 END TEST nvmf_host_management 00:08:50.033 ************************************ 00:08:50.033 07:54:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.033 07:54:55 -- common/autotest_common.sh@10 -- # set +x 00:08:50.033 07:54:55 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:08:50.033 ************************************ 00:08:50.033 END TEST nvmf_host_management 00:08:50.033 ************************************ 00:08:50.033 00:08:50.033 real 0m5.666s 00:08:50.033 user 0m21.601s 00:08:50.033 sys 0m1.298s 00:08:50.033 07:54:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:50.033 07:54:55 -- common/autotest_common.sh@10 -- # set +x 00:08:50.033 07:54:55 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:50.033 07:54:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:08:50.033 07:54:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:08:50.033 07:54:55 -- common/autotest_common.sh@10 -- # set +x 00:08:50.033 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:08:50.033 ************************************ 00:08:50.033 START TEST nvmf_lvol 00:08:50.033 ************************************ 00:08:50.033 07:54:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:50.293 * Looking for test storage... 
00:08:50.293 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:08:50.293 07:54:55 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:08:50.293 07:54:55 -- nvmf/common.sh@7 -- # uname -s 00:08:50.293 07:54:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:50.293 07:54:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:50.293 07:54:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:50.293 07:54:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:50.293 07:54:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:50.293 07:54:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:50.293 07:54:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:50.293 07:54:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:50.293 07:54:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:50.293 07:54:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:50.293 07:54:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:08:50.293 07:54:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:08:50.293 07:54:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:50.293 07:54:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:50.293 07:54:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:08:50.293 07:54:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:50.293 07:54:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:50.293 07:54:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:50.293 07:54:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:50.293 07:54:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.293 07:54:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.293 07:54:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.293 07:54:55 -- 
paths/export.sh@5 -- # export PATH 00:08:50.293 07:54:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:50.293 07:54:55 -- nvmf/common.sh@46 -- # : 0 00:08:50.293 07:54:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:50.293 07:54:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:50.293 07:54:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:50.293 07:54:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:50.293 07:54:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:50.293 07:54:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:50.293 07:54:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:50.293 07:54:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:50.293 07:54:55 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:50.293 07:54:55 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:50.293 07:54:55 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:50.293 07:54:55 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:50.293 07:54:55 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:50.293 07:54:55 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:50.293 07:54:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:08:50.293 07:54:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:50.293 07:54:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:50.293 07:54:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:50.293 07:54:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:50.293 07:54:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:50.293 07:54:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:50.293 07:54:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:50.293 07:54:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:08:50.293 07:54:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:08:50.293 07:54:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:08:50.294 07:54:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:08:50.294 07:54:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:08:50.294 07:54:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:08:50.294 07:54:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:50.294 07:54:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:50.294 07:54:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:08:50.294 07:54:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:08:50.294 07:54:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:08:50.294 07:54:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:08:50.294 07:54:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:08:50.294 07:54:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:50.294 07:54:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:08:50.294 07:54:55 -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:08:50.294 07:54:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:08:50.294 07:54:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:08:50.294 07:54:55 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:08:50.294 07:54:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:08:50.294 Cannot find device "nvmf_tgt_br" 00:08:50.294 07:54:55 -- nvmf/common.sh@154 -- # true 00:08:50.294 07:54:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:08:50.294 Cannot find device "nvmf_tgt_br2" 00:08:50.294 07:54:55 -- nvmf/common.sh@155 -- # true 00:08:50.294 07:54:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:08:50.294 07:54:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:08:50.294 Cannot find device "nvmf_tgt_br" 00:08:50.294 07:54:55 -- nvmf/common.sh@157 -- # true 00:08:50.294 07:54:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:08:50.294 Cannot find device "nvmf_tgt_br2" 00:08:50.294 07:54:55 -- nvmf/common.sh@158 -- # true 00:08:50.294 07:54:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:08:50.294 07:54:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:08:50.294 07:54:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:08:50.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.294 07:54:56 -- nvmf/common.sh@161 -- # true 00:08:50.294 07:54:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:08:50.294 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:08:50.294 07:54:56 -- nvmf/common.sh@162 -- # true 00:08:50.294 07:54:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:08:50.294 07:54:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:08:50.294 07:54:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:08:50.294 07:54:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:08:50.294 07:54:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:08:50.294 07:54:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:08:50.294 07:54:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:08:50.294 07:54:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:08:50.294 07:54:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:08:50.294 07:54:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:08:50.294 07:54:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:08:50.294 07:54:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:08:50.294 07:54:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:08:50.553 07:54:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:08:50.553 07:54:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:08:50.553 07:54:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:08:50.553 07:54:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:08:50.553 07:54:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:08:50.553 07:54:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:08:50.553 07:54:56 -- 
nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:08:50.553 07:54:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:08:50.553 07:54:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:08:50.553 07:54:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:08:50.553 07:54:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:08:50.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:50.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:08:50.553 00:08:50.553 --- 10.0.0.2 ping statistics --- 00:08:50.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.553 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:08:50.553 07:54:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:08:50.553 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:08:50.553 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.047 ms 00:08:50.553 00:08:50.553 --- 10.0.0.3 ping statistics --- 00:08:50.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.553 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:08:50.553 07:54:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:08:50.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:50.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:08:50.553 00:08:50.553 --- 10.0.0.1 ping statistics --- 00:08:50.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:50.553 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:08:50.553 07:54:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:50.553 07:54:56 -- nvmf/common.sh@421 -- # return 0 00:08:50.553 07:54:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:50.553 07:54:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:50.553 07:54:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:08:50.553 07:54:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:08:50.553 07:54:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:50.553 07:54:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:08:50.553 07:54:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:08:50.553 07:54:56 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:50.553 07:54:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:50.553 07:54:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:08:50.553 07:54:56 -- common/autotest_common.sh@10 -- # set +x 00:08:50.553 07:54:56 -- nvmf/common.sh@469 -- # nvmfpid=70938 00:08:50.553 07:54:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:50.553 07:54:56 -- nvmf/common.sh@470 -- # waitforlisten 70938 00:08:50.553 07:54:56 -- common/autotest_common.sh@819 -- # '[' -z 70938 ']' 00:08:50.553 07:54:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.553 07:54:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:08:50.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:50.553 07:54:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
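The nvmf_veth_init sequence traced above reduces to a small veth/namespace topology; the following condensation is illustrative only (the interface, namespace and address names are the ones this harness happens to use, and the second target interface on 10.0.0.3 is omitted here):

# Two veth pairs, the target end of one moved into a private network
# namespace, bridged together on the host side.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

# Initiator keeps 10.0.0.1; the namespaced target answers on 10.0.0.2.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

# Open the NVMe/TCP port and confirm reachability, as the ping output above does.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2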
00:08:50.553 07:54:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:08:50.553 07:54:56 -- common/autotest_common.sh@10 -- # set +x 00:08:50.553 [2024-07-13 07:54:56.272976] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:08:50.553 [2024-07-13 07:54:56.273073] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:50.813 [2024-07-13 07:54:56.413102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:50.813 [2024-07-13 07:54:56.453026] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:50.813 [2024-07-13 07:54:56.453186] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.813 [2024-07-13 07:54:56.453201] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.813 [2024-07-13 07:54:56.453212] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:50.813 [2024-07-13 07:54:56.453385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.813 [2024-07-13 07:54:56.454218] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.813 [2024-07-13 07:54:56.454295] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.750 07:54:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:08:51.750 07:54:57 -- common/autotest_common.sh@852 -- # return 0 00:08:51.750 07:54:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:51.750 07:54:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:08:51.750 07:54:57 -- common/autotest_common.sh@10 -- # set +x 00:08:51.750 07:54:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:51.750 07:54:57 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:52.009 [2024-07-13 07:54:57.566004] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.009 07:54:57 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.268 07:54:57 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:52.268 07:54:57 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:52.527 07:54:58 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:52.527 07:54:58 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:52.527 07:54:58 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:52.786 07:54:58 -- target/nvmf_lvol.sh@29 -- # lvs=9b70a789-8f25-4b66-8e5b-8a68ff73011c 00:08:52.786 07:54:58 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 9b70a789-8f25-4b66-8e5b-8a68ff73011c lvol 20 00:08:53.045 07:54:58 -- target/nvmf_lvol.sh@32 -- # lvol=34958017-70d3-4ac7-9a98-f8bbbcc9c802 00:08:53.045 07:54:58 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:53.304 07:54:59 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 34958017-70d3-4ac7-9a98-f8bbbcc9c802 00:08:53.563 07:54:59 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:53.822 [2024-07-13 07:54:59.525672] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.822 07:54:59 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:54.080 07:54:59 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:54.080 07:54:59 -- target/nvmf_lvol.sh@42 -- # perf_pid=70990 00:08:54.080 07:54:59 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:55.013 07:55:00 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot 34958017-70d3-4ac7-9a98-f8bbbcc9c802 MY_SNAPSHOT 00:08:55.272 07:55:01 -- target/nvmf_lvol.sh@47 -- # snapshot=ea0bf6dc-aac2-48a4-9251-bb03a7174085 00:08:55.272 07:55:01 -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize 34958017-70d3-4ac7-9a98-f8bbbcc9c802 30 00:08:55.530 07:55:01 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone ea0bf6dc-aac2-48a4-9251-bb03a7174085 MY_CLONE 00:08:56.098 07:55:01 -- target/nvmf_lvol.sh@49 -- # clone=044ec359-9540-48a8-925d-5f4fffbc0239 00:08:56.098 07:55:01 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 044ec359-9540-48a8-925d-5f4fffbc0239 00:08:56.381 07:55:02 -- target/nvmf_lvol.sh@53 -- # wait 70990 00:09:04.488 Initializing NVMe Controllers 00:09:04.488 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:04.488 Controller IO queue size 128, less than required. 00:09:04.488 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:04.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:04.488 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:04.488 Initialization complete. Launching workers. 
00:09:04.488 ======================================================== 00:09:04.488 Latency(us) 00:09:04.488 Device Information : IOPS MiB/s Average min max 00:09:04.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 9802.00 38.29 13068.91 2137.25 67929.49 00:09:04.488 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9870.60 38.56 12977.13 2223.54 70645.93 00:09:04.488 ======================================================== 00:09:04.488 Total : 19672.59 76.85 13022.86 2137.25 70645.93 00:09:04.488 00:09:04.488 07:55:10 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:04.488 07:55:10 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 34958017-70d3-4ac7-9a98-f8bbbcc9c802 00:09:04.746 07:55:10 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9b70a789-8f25-4b66-8e5b-8a68ff73011c 00:09:05.004 07:55:10 -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:05.004 07:55:10 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:05.004 07:55:10 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:05.004 07:55:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:05.004 07:55:10 -- nvmf/common.sh@116 -- # sync 00:09:05.004 07:55:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:05.004 07:55:10 -- nvmf/common.sh@119 -- # set +e 00:09:05.004 07:55:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:05.004 07:55:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:05.004 rmmod nvme_tcp 00:09:05.004 rmmod nvme_fabrics 00:09:05.004 rmmod nvme_keyring 00:09:05.004 07:55:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:05.004 07:55:10 -- nvmf/common.sh@123 -- # set -e 00:09:05.004 07:55:10 -- nvmf/common.sh@124 -- # return 0 00:09:05.004 07:55:10 -- nvmf/common.sh@477 -- # '[' -n 70938 ']' 00:09:05.004 07:55:10 -- nvmf/common.sh@478 -- # killprocess 70938 00:09:05.004 07:55:10 -- common/autotest_common.sh@926 -- # '[' -z 70938 ']' 00:09:05.004 07:55:10 -- common/autotest_common.sh@930 -- # kill -0 70938 00:09:05.004 07:55:10 -- common/autotest_common.sh@931 -- # uname 00:09:05.262 07:55:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:05.262 07:55:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 70938 00:09:05.262 killing process with pid 70938 00:09:05.262 07:55:10 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:05.263 07:55:10 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:05.263 07:55:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 70938' 00:09:05.263 07:55:10 -- common/autotest_common.sh@945 -- # kill 70938 00:09:05.263 07:55:10 -- common/autotest_common.sh@950 -- # wait 70938 00:09:05.263 07:55:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:05.263 07:55:10 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:05.263 07:55:10 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:05.263 07:55:10 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:05.263 07:55:10 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:05.263 07:55:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.263 07:55:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.263 07:55:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.263 07:55:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 
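The nvmf_lvol flow exercised above boils down to a short rpc.py sequence; a condensed sketch follows (the $rpc variable, the shell variables, and the capture of returned names are illustrative, while the sizes and flags are the ones visible in the trace):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # as invoked throughout the trace

# Back an lvol store with a RAID-0 of two malloc bdevs (64 MB, 512-byte blocks).
$rpc bdev_malloc_create 64 512          # creates Malloc0
$rpc bdev_malloc_create 64 512          # creates Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

# Export the lvol over NVMe/TCP.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Snapshot, grow, clone and inflate while spdk_nvme_perf drives I/O,
# mirroring the MY_SNAPSHOT / MY_CLONE steps above.
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"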
00:09:05.263 ************************************ 00:09:05.263 END TEST nvmf_lvol 00:09:05.263 ************************************ 00:09:05.263 00:09:05.263 real 0m15.236s 00:09:05.263 user 1m3.715s 00:09:05.263 sys 0m4.528s 00:09:05.263 07:55:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:05.263 07:55:11 -- common/autotest_common.sh@10 -- # set +x 00:09:05.263 07:55:11 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:05.263 07:55:11 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:05.263 07:55:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:05.263 07:55:11 -- common/autotest_common.sh@10 -- # set +x 00:09:05.521 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:09:05.521 ************************************ 00:09:05.521 START TEST nvmf_lvs_grow 00:09:05.521 ************************************ 00:09:05.521 07:55:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:05.521 * Looking for test storage... 00:09:05.521 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:05.521 07:55:11 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:05.521 07:55:11 -- nvmf/common.sh@7 -- # uname -s 00:09:05.521 07:55:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:05.521 07:55:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:05.521 07:55:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:05.521 07:55:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:05.521 07:55:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:05.521 07:55:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:05.521 07:55:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:05.521 07:55:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:05.521 07:55:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:05.521 07:55:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:05.521 07:55:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:09:05.521 07:55:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:09:05.521 07:55:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:05.521 07:55:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:05.521 07:55:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:05.521 07:55:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:05.521 07:55:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:05.521 07:55:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:05.521 07:55:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:05.521 07:55:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.521 
07:55:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.521 07:55:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.521 07:55:11 -- paths/export.sh@5 -- # export PATH 00:09:05.521 07:55:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:05.521 07:55:11 -- nvmf/common.sh@46 -- # : 0 00:09:05.521 07:55:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:05.521 07:55:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:05.521 07:55:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:05.521 07:55:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:05.521 07:55:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:05.521 07:55:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:05.521 07:55:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:05.521 07:55:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:05.521 07:55:11 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:05.521 07:55:11 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:05.521 07:55:11 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:09:05.521 07:55:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:05.521 07:55:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:05.521 07:55:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:05.521 07:55:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:05.521 07:55:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:05.521 07:55:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:05.521 07:55:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:05.521 07:55:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:05.521 07:55:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:05.521 07:55:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:05.521 07:55:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:05.521 07:55:11 -- nvmf/common.sh@414 -- # [[ virt == 
phy-fallback ]] 00:09:05.521 07:55:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:05.521 07:55:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:05.521 07:55:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:05.521 07:55:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.521 07:55:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:05.521 07:55:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:05.521 07:55:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:05.521 07:55:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:05.521 07:55:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:05.521 07:55:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.521 07:55:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:05.521 07:55:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:05.521 07:55:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:05.521 07:55:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:05.521 07:55:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:05.521 07:55:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:05.521 Cannot find device "nvmf_tgt_br" 00:09:05.521 07:55:11 -- nvmf/common.sh@154 -- # true 00:09:05.521 07:55:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:05.521 Cannot find device "nvmf_tgt_br2" 00:09:05.522 07:55:11 -- nvmf/common.sh@155 -- # true 00:09:05.522 07:55:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:05.522 07:55:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:05.522 Cannot find device "nvmf_tgt_br" 00:09:05.522 07:55:11 -- nvmf/common.sh@157 -- # true 00:09:05.522 07:55:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:05.522 Cannot find device "nvmf_tgt_br2" 00:09:05.522 07:55:11 -- nvmf/common.sh@158 -- # true 00:09:05.522 07:55:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:05.522 07:55:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:05.522 07:55:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:05.522 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:05.522 07:55:11 -- nvmf/common.sh@161 -- # true 00:09:05.522 07:55:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:05.522 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:05.522 07:55:11 -- nvmf/common.sh@162 -- # true 00:09:05.522 07:55:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:05.522 07:55:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:05.522 07:55:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:05.522 07:55:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:05.522 07:55:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:05.781 07:55:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:05.781 07:55:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:05.781 07:55:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:05.781 07:55:11 -- nvmf/common.sh@179 -- # ip netns 
exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:05.781 07:55:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:05.781 07:55:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:05.781 07:55:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:05.781 07:55:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:05.781 07:55:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:05.781 07:55:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:05.781 07:55:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:05.781 07:55:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:05.781 07:55:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:05.781 07:55:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:05.781 07:55:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:05.781 07:55:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:05.781 07:55:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:05.781 07:55:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:05.781 07:55:11 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:05.781 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:05.781 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:09:05.781 00:09:05.781 --- 10.0.0.2 ping statistics --- 00:09:05.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.781 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:09:05.781 07:55:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:05.781 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:05.781 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.041 ms 00:09:05.781 00:09:05.781 --- 10.0.0.3 ping statistics --- 00:09:05.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.781 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:09:05.781 07:55:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:05.781 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:05.781 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:09:05.781 00:09:05.781 --- 10.0.0.1 ping statistics --- 00:09:05.781 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.781 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:09:05.781 07:55:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.781 07:55:11 -- nvmf/common.sh@421 -- # return 0 00:09:05.781 07:55:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:05.781 07:55:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.781 07:55:11 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:05.781 07:55:11 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:05.781 07:55:11 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.781 07:55:11 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:05.781 07:55:11 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:05.781 07:55:11 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:09:05.781 07:55:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:05.781 07:55:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:05.781 07:55:11 -- common/autotest_common.sh@10 -- # set +x 00:09:05.781 07:55:11 -- nvmf/common.sh@469 -- # nvmfpid=71247 00:09:05.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.781 07:55:11 -- nvmf/common.sh@470 -- # waitforlisten 71247 00:09:05.781 07:55:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:05.781 07:55:11 -- common/autotest_common.sh@819 -- # '[' -z 71247 ']' 00:09:05.781 07:55:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.781 07:55:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:05.781 07:55:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.781 07:55:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:05.781 07:55:11 -- common/autotest_common.sh@10 -- # set +x 00:09:05.781 [2024-07-13 07:55:11.558382] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:05.781 [2024-07-13 07:55:11.558482] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.040 [2024-07-13 07:55:11.701088] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.040 [2024-07-13 07:55:11.741011] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:06.040 [2024-07-13 07:55:11.741172] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.040 [2024-07-13 07:55:11.741197] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:06.040 [2024-07-13 07:55:11.741208] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
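Before the target starts, nvmf_veth_init (traced a few lines above) builds an isolated test network: a network namespace for the target, two veth pairs, a bridge joining their peer ends, and an iptables rule admitting TCP port 4420, all verified by the three pings shown. A condensed sketch of those steps, using the same device names and 10.0.0.x addresses as common.sh:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target end moves into the namespace
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge; ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2   # the initiator must reach the target address before the test proceeds

The nvmf_tgt process itself is then launched inside the namespace via ip netns exec, which is why its listener on 10.0.0.2:4420 is only reachable through the veth pair.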
00:09:06.040 [2024-07-13 07:55:11.741244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.975 07:55:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:06.975 07:55:12 -- common/autotest_common.sh@852 -- # return 0 00:09:06.975 07:55:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:06.975 07:55:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:06.975 07:55:12 -- common/autotest_common.sh@10 -- # set +x 00:09:06.976 07:55:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.976 07:55:12 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:07.233 [2024-07-13 07:55:12.819419] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:07.233 07:55:12 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:09:07.233 07:55:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:09:07.233 07:55:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:07.233 07:55:12 -- common/autotest_common.sh@10 -- # set +x 00:09:07.233 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:09:07.233 ************************************ 00:09:07.233 START TEST lvs_grow_clean 00:09:07.233 ************************************ 00:09:07.233 07:55:12 -- common/autotest_common.sh@1104 -- # lvs_grow 00:09:07.233 07:55:12 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:07.233 07:55:12 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:07.233 07:55:12 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:07.233 07:55:12 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:07.233 07:55:12 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:07.233 07:55:12 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:07.233 07:55:12 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:07.233 07:55:12 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:07.233 07:55:12 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:07.491 07:55:13 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:07.491 07:55:13 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:07.749 07:55:13 -- target/nvmf_lvs_grow.sh@28 -- # lvs=5fdd6305-0655-4dee-ae94-037e72a940a7 00:09:07.749 07:55:13 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd6305-0655-4dee-ae94-037e72a940a7 00:09:07.749 07:55:13 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:08.007 07:55:13 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:08.007 07:55:13 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:08.007 07:55:13 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 5fdd6305-0655-4dee-ae94-037e72a940a7 lvol 150 00:09:08.007 07:55:13 -- target/nvmf_lvs_grow.sh@33 -- # lvol=a10029b3-41bd-4fe3-a8fd-8d7089e01f1e 00:09:08.007 07:55:13 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:08.007 07:55:13 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:08.575 [2024-07-13 07:55:14.082727] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:08.575 [2024-07-13 07:55:14.082861] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:08.575 true 00:09:08.575 07:55:14 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd6305-0655-4dee-ae94-037e72a940a7 00:09:08.575 07:55:14 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:08.575 07:55:14 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:08.575 07:55:14 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:08.834 07:55:14 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a10029b3-41bd-4fe3-a8fd-8d7089e01f1e 00:09:09.092 07:55:14 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:09.350 [2024-07-13 07:55:15.007309] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:09.350 07:55:15 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:09.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:09.609 07:55:15 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=71311 00:09:09.609 07:55:15 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:09.609 07:55:15 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:09.609 07:55:15 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 71311 /var/tmp/bdevperf.sock 00:09:09.609 07:55:15 -- common/autotest_common.sh@819 -- # '[' -z 71311 ']' 00:09:09.609 07:55:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:09.609 07:55:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:09.609 07:55:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:09.609 07:55:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:09.609 07:55:15 -- common/autotest_common.sh@10 -- # set +x 00:09:09.609 [2024-07-13 07:55:15.275511] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:09:09.609 [2024-07-13 07:55:15.275816] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71311 ] 00:09:09.609 [2024-07-13 07:55:15.413107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.868 [2024-07-13 07:55:15.454039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:10.435 07:55:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:10.435 07:55:16 -- common/autotest_common.sh@852 -- # return 0 00:09:10.435 07:55:16 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:10.693 Nvme0n1 00:09:10.693 07:55:16 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:10.952 [ 00:09:10.952 { 00:09:10.952 "name": "Nvme0n1", 00:09:10.952 "aliases": [ 00:09:10.952 "a10029b3-41bd-4fe3-a8fd-8d7089e01f1e" 00:09:10.952 ], 00:09:10.952 "product_name": "NVMe disk", 00:09:10.952 "block_size": 4096, 00:09:10.952 "num_blocks": 38912, 00:09:10.952 "uuid": "a10029b3-41bd-4fe3-a8fd-8d7089e01f1e", 00:09:10.952 "assigned_rate_limits": { 00:09:10.952 "rw_ios_per_sec": 0, 00:09:10.952 "rw_mbytes_per_sec": 0, 00:09:10.952 "r_mbytes_per_sec": 0, 00:09:10.952 "w_mbytes_per_sec": 0 00:09:10.952 }, 00:09:10.952 "claimed": false, 00:09:10.952 "zoned": false, 00:09:10.952 "supported_io_types": { 00:09:10.952 "read": true, 00:09:10.952 "write": true, 00:09:10.952 "unmap": true, 00:09:10.952 "write_zeroes": true, 00:09:10.952 "flush": true, 00:09:10.952 "reset": true, 00:09:10.952 "compare": true, 00:09:10.952 "compare_and_write": true, 00:09:10.952 "abort": true, 00:09:10.952 "nvme_admin": true, 00:09:10.952 "nvme_io": true 00:09:10.952 }, 00:09:10.952 "driver_specific": { 00:09:10.952 "nvme": [ 00:09:10.952 { 00:09:10.952 "trid": { 00:09:10.952 "trtype": "TCP", 00:09:10.952 "adrfam": "IPv4", 00:09:10.952 "traddr": "10.0.0.2", 00:09:10.952 "trsvcid": "4420", 00:09:10.952 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:10.952 }, 00:09:10.952 "ctrlr_data": { 00:09:10.952 "cntlid": 1, 00:09:10.952 "vendor_id": "0x8086", 00:09:10.952 "model_number": "SPDK bdev Controller", 00:09:10.952 "serial_number": "SPDK0", 00:09:10.952 "firmware_revision": "24.01.1", 00:09:10.952 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:10.952 "oacs": { 00:09:10.952 "security": 0, 00:09:10.952 "format": 0, 00:09:10.952 "firmware": 0, 00:09:10.952 "ns_manage": 0 00:09:10.952 }, 00:09:10.952 "multi_ctrlr": true, 00:09:10.952 "ana_reporting": false 00:09:10.952 }, 00:09:10.952 "vs": { 00:09:10.952 "nvme_version": "1.3" 00:09:10.952 }, 00:09:10.952 "ns_data": { 00:09:10.952 "id": 1, 00:09:10.952 "can_share": true 00:09:10.952 } 00:09:10.952 } 00:09:10.952 ], 00:09:10.952 "mp_policy": "active_passive" 00:09:10.952 } 00:09:10.952 } 00:09:10.952 ] 00:09:10.952 07:55:16 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=71323 00:09:10.952 07:55:16 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:10.952 07:55:16 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:11.211 Running I/O for 10 seconds... 
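At this point the lvol has been exposed as namespace 1 of nqn.2016-06.io.spdk:cnode0 and a separate bdevperf process has attached to it over TCP; the ten one-second samples that follow come from its randwrite workload. A condensed sketch of that export-and-attach sequence, reusing the RPC calls, socket path, and lvol UUID from this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side: wrap the lvol bdev in an NVMe-oF TCP subsystem
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a10029b3-41bd-4fe3-a8fd-8d7089e01f1e
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf connects over TCP and runs 4 KiB random writes for 10 s
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

bdev_lvol_grow_lvstore is issued mid-run, so the lvstore is resized while this I/O is in flight.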
00:09:12.147 Latency(us) 00:09:12.147 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:12.147 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:12.147 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:12.147 =================================================================================================================== 00:09:12.147 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:12.147 00:09:13.107 07:55:18 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5fdd6305-0655-4dee-ae94-037e72a940a7 00:09:13.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:13.107 Nvme0n1 : 2.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:13.107 =================================================================================================================== 00:09:13.107 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:13.107 00:09:13.366 true 00:09:13.366 07:55:19 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd6305-0655-4dee-ae94-037e72a940a7 00:09:13.366 07:55:19 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:13.624 07:55:19 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:13.624 07:55:19 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:13.624 07:55:19 -- target/nvmf_lvs_grow.sh@65 -- # wait 71323 00:09:14.192 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:14.192 Nvme0n1 : 3.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:14.192 =================================================================================================================== 00:09:14.192 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:14.192 00:09:15.126 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:15.126 Nvme0n1 : 4.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:15.126 =================================================================================================================== 00:09:15.126 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:15.126 00:09:16.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:16.104 Nvme0n1 : 5.00 6959.60 27.19 0.00 0.00 0.00 0.00 0.00 00:09:16.104 =================================================================================================================== 00:09:16.104 Total : 6959.60 27.19 0.00 0.00 0.00 0.00 0.00 00:09:16.104 00:09:17.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:17.039 Nvme0n1 : 6.00 6942.67 27.12 0.00 0.00 0.00 0.00 0.00 00:09:17.039 =================================================================================================================== 00:09:17.039 Total : 6942.67 27.12 0.00 0.00 0.00 0.00 0.00 00:09:17.039 00:09:18.411 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:18.411 Nvme0n1 : 7.00 6930.57 27.07 0.00 0.00 0.00 0.00 0.00 00:09:18.411 =================================================================================================================== 00:09:18.411 Total : 6930.57 27.07 0.00 0.00 0.00 0.00 0.00 00:09:18.411 00:09:19.347 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:19.347 Nvme0n1 : 8.00 6889.75 26.91 0.00 0.00 0.00 0.00 0.00 00:09:19.347 
=================================================================================================================== 00:09:19.347 Total : 6889.75 26.91 0.00 0.00 0.00 0.00 0.00 00:09:19.347 00:09:20.285 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:20.285 Nvme0n1 : 9.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:20.285 =================================================================================================================== 00:09:20.285 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:20.285 00:09:21.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.217 Nvme0n1 : 10.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:21.217 =================================================================================================================== 00:09:21.217 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:21.217 00:09:21.217 00:09:21.217 Latency(us) 00:09:21.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.217 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.217 Nvme0n1 : 10.01 6863.73 26.81 0.00 0.00 18642.54 15847.80 40513.16 00:09:21.217 =================================================================================================================== 00:09:21.217 Total : 6863.73 26.81 0.00 0.00 18642.54 15847.80 40513.16 00:09:21.217 0 00:09:21.217 07:55:26 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 71311 00:09:21.217 07:55:26 -- common/autotest_common.sh@926 -- # '[' -z 71311 ']' 00:09:21.217 07:55:26 -- common/autotest_common.sh@930 -- # kill -0 71311 00:09:21.217 07:55:26 -- common/autotest_common.sh@931 -- # uname 00:09:21.217 07:55:26 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:21.217 07:55:26 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71311 00:09:21.217 07:55:26 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:09:21.217 07:55:26 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:09:21.217 killing process with pid 71311 00:09:21.217 07:55:26 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71311' 00:09:21.217 07:55:26 -- common/autotest_common.sh@945 -- # kill 71311 00:09:21.217 Received shutdown signal, test time was about 10.000000 seconds 00:09:21.217 00:09:21.217 Latency(us) 00:09:21.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.217 =================================================================================================================== 00:09:21.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:21.217 07:55:26 -- common/autotest_common.sh@950 -- # wait 71311 00:09:21.217 07:55:27 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:21.476 07:55:27 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd6305-0655-4dee-ae94-037e72a940a7 00:09:21.476 07:55:27 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:21.735 07:55:27 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:21.735 07:55:27 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:09:21.735 07:55:27 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:21.993 [2024-07-13 07:55:27.727376] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:21.993 
07:55:27 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd6305-0655-4dee-ae94-037e72a940a7 00:09:21.993 07:55:27 -- common/autotest_common.sh@640 -- # local es=0 00:09:21.993 07:55:27 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd6305-0655-4dee-ae94-037e72a940a7 00:09:21.993 07:55:27 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.993 07:55:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:21.993 07:55:27 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.993 07:55:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:21.993 07:55:27 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.993 07:55:27 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:21.994 07:55:27 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:21.994 07:55:27 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:21.994 07:55:27 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd6305-0655-4dee-ae94-037e72a940a7 00:09:22.252 request: 00:09:22.252 { 00:09:22.252 "uuid": "5fdd6305-0655-4dee-ae94-037e72a940a7", 00:09:22.252 "method": "bdev_lvol_get_lvstores", 00:09:22.252 "req_id": 1 00:09:22.252 } 00:09:22.252 Got JSON-RPC error response 00:09:22.252 response: 00:09:22.252 { 00:09:22.252 "code": -19, 00:09:22.252 "message": "No such device" 00:09:22.252 } 00:09:22.252 07:55:27 -- common/autotest_common.sh@643 -- # es=1 00:09:22.252 07:55:27 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:22.252 07:55:27 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:22.252 07:55:27 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:22.252 07:55:27 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:22.511 aio_bdev 00:09:22.511 07:55:28 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev a10029b3-41bd-4fe3-a8fd-8d7089e01f1e 00:09:22.511 07:55:28 -- common/autotest_common.sh@887 -- # local bdev_name=a10029b3-41bd-4fe3-a8fd-8d7089e01f1e 00:09:22.511 07:55:28 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:22.511 07:55:28 -- common/autotest_common.sh@889 -- # local i 00:09:22.511 07:55:28 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:22.511 07:55:28 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:22.511 07:55:28 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:22.769 07:55:28 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b a10029b3-41bd-4fe3-a8fd-8d7089e01f1e -t 2000 00:09:23.028 [ 00:09:23.028 { 00:09:23.028 "name": "a10029b3-41bd-4fe3-a8fd-8d7089e01f1e", 00:09:23.028 "aliases": [ 00:09:23.028 "lvs/lvol" 00:09:23.028 ], 00:09:23.028 "product_name": "Logical Volume", 00:09:23.028 "block_size": 4096, 00:09:23.028 "num_blocks": 38912, 00:09:23.028 "uuid": "a10029b3-41bd-4fe3-a8fd-8d7089e01f1e", 00:09:23.028 "assigned_rate_limits": { 00:09:23.028 "rw_ios_per_sec": 0, 00:09:23.028 "rw_mbytes_per_sec": 0, 00:09:23.028 "r_mbytes_per_sec": 0, 00:09:23.028 
"w_mbytes_per_sec": 0 00:09:23.028 }, 00:09:23.028 "claimed": false, 00:09:23.028 "zoned": false, 00:09:23.028 "supported_io_types": { 00:09:23.028 "read": true, 00:09:23.028 "write": true, 00:09:23.028 "unmap": true, 00:09:23.028 "write_zeroes": true, 00:09:23.028 "flush": false, 00:09:23.028 "reset": true, 00:09:23.028 "compare": false, 00:09:23.028 "compare_and_write": false, 00:09:23.028 "abort": false, 00:09:23.028 "nvme_admin": false, 00:09:23.028 "nvme_io": false 00:09:23.028 }, 00:09:23.028 "driver_specific": { 00:09:23.028 "lvol": { 00:09:23.028 "lvol_store_uuid": "5fdd6305-0655-4dee-ae94-037e72a940a7", 00:09:23.028 "base_bdev": "aio_bdev", 00:09:23.028 "thin_provision": false, 00:09:23.028 "snapshot": false, 00:09:23.028 "clone": false, 00:09:23.028 "esnap_clone": false 00:09:23.028 } 00:09:23.028 } 00:09:23.028 } 00:09:23.028 ] 00:09:23.028 07:55:28 -- common/autotest_common.sh@895 -- # return 0 00:09:23.028 07:55:28 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd6305-0655-4dee-ae94-037e72a940a7 00:09:23.028 07:55:28 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:23.287 07:55:28 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:23.287 07:55:28 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5fdd6305-0655-4dee-ae94-037e72a940a7 00:09:23.287 07:55:28 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:23.546 07:55:29 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:23.546 07:55:29 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a10029b3-41bd-4fe3-a8fd-8d7089e01f1e 00:09:23.546 07:55:29 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5fdd6305-0655-4dee-ae94-037e72a940a7 00:09:23.805 07:55:29 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:24.063 07:55:29 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:24.320 00:09:24.320 real 0m17.276s 00:09:24.320 user 0m16.434s 00:09:24.320 sys 0m2.234s 00:09:24.320 07:55:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:24.320 07:55:30 -- common/autotest_common.sh@10 -- # set +x 00:09:24.320 ************************************ 00:09:24.320 END TEST lvs_grow_clean 00:09:24.320 ************************************ 00:09:24.579 07:55:30 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:24.579 07:55:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:24.579 07:55:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:24.579 07:55:30 -- common/autotest_common.sh@10 -- # set +x 00:09:24.579 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:09:24.579 ************************************ 00:09:24.579 START TEST lvs_grow_dirty 00:09:24.579 ************************************ 00:09:24.579 07:55:30 -- common/autotest_common.sh@1104 -- # lvs_grow dirty 00:09:24.579 07:55:30 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:24.579 07:55:30 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:24.579 07:55:30 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:24.579 07:55:30 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:24.579 07:55:30 -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:24.579 07:55:30 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:24.579 07:55:30 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:24.579 07:55:30 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:24.579 07:55:30 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:24.837 07:55:30 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:24.837 07:55:30 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:25.116 07:55:30 -- target/nvmf_lvs_grow.sh@28 -- # lvs=a8987129-1376-42d7-8730-f7dcd4c129cc 00:09:25.116 07:55:30 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8987129-1376-42d7-8730-f7dcd4c129cc 00:09:25.116 07:55:30 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:25.391 07:55:31 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:25.391 07:55:31 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:25.391 07:55:31 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u a8987129-1376-42d7-8730-f7dcd4c129cc lvol 150 00:09:25.649 07:55:31 -- target/nvmf_lvs_grow.sh@33 -- # lvol=0c36500e-09a2-464e-b404-ee610413be69 00:09:25.649 07:55:31 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:25.649 07:55:31 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:25.906 [2024-07-13 07:55:31.541066] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:25.906 [2024-07-13 07:55:31.541157] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:25.906 true 00:09:25.906 07:55:31 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:25.906 07:55:31 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8987129-1376-42d7-8730-f7dcd4c129cc 00:09:26.165 07:55:31 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:26.165 07:55:31 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:26.423 07:55:32 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0c36500e-09a2-464e-b404-ee610413be69 00:09:26.682 07:55:32 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:26.940 07:55:32 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:27.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
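As in the clean variant, the dirty test builds its lvstore on a file-backed AIO bdev: a 200 MB file is created and registered, a 4 MiB-cluster lvstore and a 150 MB lvol are carved out of it, and the file is then truncated to 400 MB and rescanned so the lvstore can later be grown into the new space. A minimal sketch of those steps with the paths and sizes the test uses:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  aio_file=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev

  truncate -s 200M "$aio_file"
  $rpc bdev_aio_create "$aio_file" aio_bdev 4096
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)

  # Double the backing file and let the AIO bdev report the new size (49 -> 99 data clusters once grown)
  truncate -s 400M "$aio_file"
  $rpc bdev_aio_rescan aio_bdev

What makes this variant "dirty" is that the target is later killed with SIGKILL and restarted, so the 99-total / 61-free cluster check only passes after blobstore recovery replays the lvstore metadata (the bs_recover notices further down in this trace).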
00:09:27.209 07:55:32 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=71466 00:09:27.209 07:55:32 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:27.209 07:55:32 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:27.209 07:55:32 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 71466 /var/tmp/bdevperf.sock 00:09:27.209 07:55:32 -- common/autotest_common.sh@819 -- # '[' -z 71466 ']' 00:09:27.209 07:55:32 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:27.209 07:55:32 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:27.209 07:55:32 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:27.209 07:55:32 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:27.209 07:55:32 -- common/autotest_common.sh@10 -- # set +x 00:09:27.209 [2024-07-13 07:55:32.858396] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:27.209 [2024-07-13 07:55:32.858668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71466 ] 00:09:27.209 [2024-07-13 07:55:32.990761] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:27.472 [2024-07-13 07:55:33.025450] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:28.037 07:55:33 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:28.037 07:55:33 -- common/autotest_common.sh@852 -- # return 0 00:09:28.037 07:55:33 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:28.602 Nvme0n1 00:09:28.602 07:55:34 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:28.860 [ 00:09:28.860 { 00:09:28.860 "name": "Nvme0n1", 00:09:28.860 "aliases": [ 00:09:28.860 "0c36500e-09a2-464e-b404-ee610413be69" 00:09:28.860 ], 00:09:28.860 "product_name": "NVMe disk", 00:09:28.860 "block_size": 4096, 00:09:28.860 "num_blocks": 38912, 00:09:28.860 "uuid": "0c36500e-09a2-464e-b404-ee610413be69", 00:09:28.860 "assigned_rate_limits": { 00:09:28.860 "rw_ios_per_sec": 0, 00:09:28.860 "rw_mbytes_per_sec": 0, 00:09:28.860 "r_mbytes_per_sec": 0, 00:09:28.860 "w_mbytes_per_sec": 0 00:09:28.860 }, 00:09:28.860 "claimed": false, 00:09:28.860 "zoned": false, 00:09:28.860 "supported_io_types": { 00:09:28.860 "read": true, 00:09:28.860 "write": true, 00:09:28.860 "unmap": true, 00:09:28.860 "write_zeroes": true, 00:09:28.860 "flush": true, 00:09:28.860 "reset": true, 00:09:28.860 "compare": true, 00:09:28.860 "compare_and_write": true, 00:09:28.860 "abort": true, 00:09:28.860 "nvme_admin": true, 00:09:28.860 "nvme_io": true 00:09:28.860 }, 00:09:28.860 "driver_specific": { 00:09:28.860 "nvme": [ 00:09:28.860 { 00:09:28.860 "trid": { 00:09:28.860 "trtype": "TCP", 00:09:28.860 "adrfam": "IPv4", 00:09:28.860 "traddr": "10.0.0.2", 00:09:28.860 "trsvcid": "4420", 00:09:28.860 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:28.860 }, 00:09:28.860 "ctrlr_data": { 00:09:28.860 "cntlid": 1, 00:09:28.860 
"vendor_id": "0x8086", 00:09:28.860 "model_number": "SPDK bdev Controller", 00:09:28.860 "serial_number": "SPDK0", 00:09:28.860 "firmware_revision": "24.01.1", 00:09:28.860 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:28.860 "oacs": { 00:09:28.860 "security": 0, 00:09:28.860 "format": 0, 00:09:28.860 "firmware": 0, 00:09:28.860 "ns_manage": 0 00:09:28.860 }, 00:09:28.860 "multi_ctrlr": true, 00:09:28.860 "ana_reporting": false 00:09:28.860 }, 00:09:28.860 "vs": { 00:09:28.860 "nvme_version": "1.3" 00:09:28.860 }, 00:09:28.860 "ns_data": { 00:09:28.860 "id": 1, 00:09:28.860 "can_share": true 00:09:28.860 } 00:09:28.860 } 00:09:28.860 ], 00:09:28.860 "mp_policy": "active_passive" 00:09:28.860 } 00:09:28.860 } 00:09:28.860 ] 00:09:28.860 07:55:34 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=71483 00:09:28.860 07:55:34 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:28.860 07:55:34 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:28.860 Running I/O for 10 seconds... 00:09:29.794 Latency(us) 00:09:29.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:29.794 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.794 Nvme0n1 : 1.00 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:29.794 =================================================================================================================== 00:09:29.794 Total : 6985.00 27.29 0.00 0.00 0.00 0.00 0.00 00:09:29.794 00:09:30.728 07:55:36 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u a8987129-1376-42d7-8730-f7dcd4c129cc 00:09:30.986 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.986 Nvme0n1 : 2.00 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:30.986 =================================================================================================================== 00:09:30.986 Total : 6858.00 26.79 0.00 0.00 0.00 0.00 0.00 00:09:30.986 00:09:30.986 true 00:09:30.986 07:55:36 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8987129-1376-42d7-8730-f7dcd4c129cc 00:09:30.986 07:55:36 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:31.245 07:55:37 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:31.245 07:55:37 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:31.245 07:55:37 -- target/nvmf_lvs_grow.sh@65 -- # wait 71483 00:09:31.813 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:31.813 Nvme0n1 : 3.00 6900.33 26.95 0.00 0.00 0.00 0.00 0.00 00:09:31.813 =================================================================================================================== 00:09:31.813 Total : 6900.33 26.95 0.00 0.00 0.00 0.00 0.00 00:09:31.813 00:09:32.750 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:32.750 Nvme0n1 : 4.00 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:09:32.750 =================================================================================================================== 00:09:32.750 Total : 6921.50 27.04 0.00 0.00 0.00 0.00 0.00 00:09:32.750 00:09:34.125 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:34.125 Nvme0n1 : 5.00 6908.80 26.99 0.00 0.00 0.00 0.00 0.00 00:09:34.125 
=================================================================================================================== 00:09:34.125 Total : 6908.80 26.99 0.00 0.00 0.00 0.00 0.00 00:09:34.125 00:09:35.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.061 Nvme0n1 : 6.00 6900.33 26.95 0.00 0.00 0.00 0.00 0.00 00:09:35.061 =================================================================================================================== 00:09:35.061 Total : 6900.33 26.95 0.00 0.00 0.00 0.00 0.00 00:09:35.061 00:09:35.996 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:35.996 Nvme0n1 : 7.00 6876.14 26.86 0.00 0.00 0.00 0.00 0.00 00:09:35.996 =================================================================================================================== 00:09:35.996 Total : 6876.14 26.86 0.00 0.00 0.00 0.00 0.00 00:09:35.996 00:09:36.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:36.932 Nvme0n1 : 8.00 6663.88 26.03 0.00 0.00 0.00 0.00 0.00 00:09:36.932 =================================================================================================================== 00:09:36.932 Total : 6663.88 26.03 0.00 0.00 0.00 0.00 0.00 00:09:36.932 00:09:37.867 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:37.867 Nvme0n1 : 9.00 6629.00 25.89 0.00 0.00 0.00 0.00 0.00 00:09:37.867 =================================================================================================================== 00:09:37.867 Total : 6629.00 25.89 0.00 0.00 0.00 0.00 0.00 00:09:37.867 00:09:38.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.799 Nvme0n1 : 10.00 6613.80 25.84 0.00 0.00 0.00 0.00 0.00 00:09:38.799 =================================================================================================================== 00:09:38.799 Total : 6613.80 25.84 0.00 0.00 0.00 0.00 0.00 00:09:38.799 00:09:38.799 00:09:38.799 Latency(us) 00:09:38.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:38.799 Nvme0n1 : 10.02 6616.02 25.84 0.00 0.00 19342.63 6285.50 265003.75 00:09:38.799 =================================================================================================================== 00:09:38.799 Total : 6616.02 25.84 0.00 0.00 19342.63 6285.50 265003.75 00:09:38.799 0 00:09:38.799 07:55:44 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 71466 00:09:38.799 07:55:44 -- common/autotest_common.sh@926 -- # '[' -z 71466 ']' 00:09:38.799 07:55:44 -- common/autotest_common.sh@930 -- # kill -0 71466 00:09:38.799 07:55:44 -- common/autotest_common.sh@931 -- # uname 00:09:38.799 07:55:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:38.799 07:55:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71466 00:09:38.799 killing process with pid 71466 00:09:38.799 Received shutdown signal, test time was about 10.000000 seconds 00:09:38.799 00:09:38.799 Latency(us) 00:09:38.799 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:38.799 =================================================================================================================== 00:09:38.799 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:38.799 07:55:44 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:09:38.799 07:55:44 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
00:09:38.799 07:55:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71466' 00:09:38.799 07:55:44 -- common/autotest_common.sh@945 -- # kill 71466 00:09:38.799 07:55:44 -- common/autotest_common.sh@950 -- # wait 71466 00:09:39.058 07:55:44 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:39.316 07:55:44 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8987129-1376-42d7-8730-f7dcd4c129cc 00:09:39.316 07:55:44 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:09:39.575 07:55:45 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:09:39.575 07:55:45 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:09:39.575 07:55:45 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 71247 00:09:39.575 07:55:45 -- target/nvmf_lvs_grow.sh@74 -- # wait 71247 00:09:39.575 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 71247 Killed "${NVMF_APP[@]}" "$@" 00:09:39.575 07:55:45 -- target/nvmf_lvs_grow.sh@74 -- # true 00:09:39.575 07:55:45 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:09:39.575 07:55:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:39.575 07:55:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:39.575 07:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:39.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:39.575 07:55:45 -- nvmf/common.sh@469 -- # nvmfpid=71549 00:09:39.575 07:55:45 -- nvmf/common.sh@470 -- # waitforlisten 71549 00:09:39.575 07:55:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:39.575 07:55:45 -- common/autotest_common.sh@819 -- # '[' -z 71549 ']' 00:09:39.575 07:55:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:39.575 07:55:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:39.575 07:55:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:39.575 07:55:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:39.575 07:55:45 -- common/autotest_common.sh@10 -- # set +x 00:09:39.575 [2024-07-13 07:55:45.270520] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:39.575 [2024-07-13 07:55:45.270606] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:39.834 [2024-07-13 07:55:45.411450] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:39.834 [2024-07-13 07:55:45.443017] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:39.834 [2024-07-13 07:55:45.443172] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:39.834 [2024-07-13 07:55:45.443185] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:39.834 [2024-07-13 07:55:45.443207] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:39.834 [2024-07-13 07:55:45.443236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:40.402 07:55:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:40.402 07:55:46 -- common/autotest_common.sh@852 -- # return 0 00:09:40.402 07:55:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:40.402 07:55:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:40.402 07:55:46 -- common/autotest_common.sh@10 -- # set +x 00:09:40.661 07:55:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:40.661 07:55:46 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:40.661 [2024-07-13 07:55:46.435319] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:40.661 [2024-07-13 07:55:46.435675] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:40.661 [2024-07-13 07:55:46.435947] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:40.929 07:55:46 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:09:40.929 07:55:46 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 0c36500e-09a2-464e-b404-ee610413be69 00:09:40.929 07:55:46 -- common/autotest_common.sh@887 -- # local bdev_name=0c36500e-09a2-464e-b404-ee610413be69 00:09:40.929 07:55:46 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:40.929 07:55:46 -- common/autotest_common.sh@889 -- # local i 00:09:40.929 07:55:46 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:40.929 07:55:46 -- common/autotest_common.sh@890 -- # bdev_timeout=2000 00:09:40.929 07:55:46 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:41.233 07:55:46 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c36500e-09a2-464e-b404-ee610413be69 -t 2000 00:09:41.233 [ 00:09:41.233 { 00:09:41.233 "name": "0c36500e-09a2-464e-b404-ee610413be69", 00:09:41.233 "aliases": [ 00:09:41.233 "lvs/lvol" 00:09:41.233 ], 00:09:41.233 "product_name": "Logical Volume", 00:09:41.233 "block_size": 4096, 00:09:41.233 "num_blocks": 38912, 00:09:41.233 "uuid": "0c36500e-09a2-464e-b404-ee610413be69", 00:09:41.233 "assigned_rate_limits": { 00:09:41.233 "rw_ios_per_sec": 0, 00:09:41.233 "rw_mbytes_per_sec": 0, 00:09:41.233 "r_mbytes_per_sec": 0, 00:09:41.233 "w_mbytes_per_sec": 0 00:09:41.233 }, 00:09:41.233 "claimed": false, 00:09:41.233 "zoned": false, 00:09:41.233 "supported_io_types": { 00:09:41.233 "read": true, 00:09:41.233 "write": true, 00:09:41.233 "unmap": true, 00:09:41.233 "write_zeroes": true, 00:09:41.233 "flush": false, 00:09:41.233 "reset": true, 00:09:41.233 "compare": false, 00:09:41.233 "compare_and_write": false, 00:09:41.233 "abort": false, 00:09:41.233 "nvme_admin": false, 00:09:41.233 "nvme_io": false 00:09:41.233 }, 00:09:41.233 "driver_specific": { 00:09:41.233 "lvol": { 00:09:41.233 "lvol_store_uuid": "a8987129-1376-42d7-8730-f7dcd4c129cc", 00:09:41.233 "base_bdev": "aio_bdev", 00:09:41.233 "thin_provision": false, 00:09:41.233 "snapshot": false, 00:09:41.233 "clone": false, 00:09:41.233 "esnap_clone": false 00:09:41.233 } 00:09:41.233 } 00:09:41.233 } 00:09:41.233 ] 00:09:41.233 07:55:46 -- common/autotest_common.sh@895 -- # return 0 00:09:41.233 07:55:46 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:09:41.233 07:55:46 -- target/nvmf_lvs_grow.sh@78 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8987129-1376-42d7-8730-f7dcd4c129cc 00:09:41.498 07:55:47 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:09:41.498 07:55:47 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8987129-1376-42d7-8730-f7dcd4c129cc 00:09:41.498 07:55:47 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:09:41.757 07:55:47 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:09:41.757 07:55:47 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:42.016 [2024-07-13 07:55:47.649528] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:42.016 07:55:47 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8987129-1376-42d7-8730-f7dcd4c129cc 00:09:42.016 07:55:47 -- common/autotest_common.sh@640 -- # local es=0 00:09:42.016 07:55:47 -- common/autotest_common.sh@642 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8987129-1376-42d7-8730-f7dcd4c129cc 00:09:42.016 07:55:47 -- common/autotest_common.sh@628 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.016 07:55:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:42.016 07:55:47 -- common/autotest_common.sh@632 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.016 07:55:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:42.016 07:55:47 -- common/autotest_common.sh@634 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.016 07:55:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:09:42.016 07:55:47 -- common/autotest_common.sh@634 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:42.016 07:55:47 -- common/autotest_common.sh@634 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:09:42.016 07:55:47 -- common/autotest_common.sh@643 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8987129-1376-42d7-8730-f7dcd4c129cc 00:09:42.275 request: 00:09:42.275 { 00:09:42.275 "uuid": "a8987129-1376-42d7-8730-f7dcd4c129cc", 00:09:42.275 "method": "bdev_lvol_get_lvstores", 00:09:42.275 "req_id": 1 00:09:42.275 } 00:09:42.275 Got JSON-RPC error response 00:09:42.275 response: 00:09:42.275 { 00:09:42.275 "code": -19, 00:09:42.275 "message": "No such device" 00:09:42.275 } 00:09:42.275 07:55:47 -- common/autotest_common.sh@643 -- # es=1 00:09:42.275 07:55:47 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:09:42.275 07:55:47 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:09:42.275 07:55:47 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:09:42.275 07:55:47 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:42.534 aio_bdev 00:09:42.534 07:55:48 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 0c36500e-09a2-464e-b404-ee610413be69 00:09:42.534 07:55:48 -- common/autotest_common.sh@887 -- # local bdev_name=0c36500e-09a2-464e-b404-ee610413be69 00:09:42.534 07:55:48 -- common/autotest_common.sh@888 -- # local bdev_timeout= 00:09:42.534 07:55:48 -- common/autotest_common.sh@889 -- # local i 00:09:42.534 07:55:48 -- common/autotest_common.sh@890 -- # [[ -z '' ]] 00:09:42.534 07:55:48 -- common/autotest_common.sh@890 -- # 
bdev_timeout=2000 00:09:42.534 07:55:48 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:42.793 07:55:48 -- common/autotest_common.sh@894 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0c36500e-09a2-464e-b404-ee610413be69 -t 2000 00:09:42.793 [ 00:09:42.793 { 00:09:42.793 "name": "0c36500e-09a2-464e-b404-ee610413be69", 00:09:42.793 "aliases": [ 00:09:42.793 "lvs/lvol" 00:09:42.793 ], 00:09:42.793 "product_name": "Logical Volume", 00:09:42.793 "block_size": 4096, 00:09:42.793 "num_blocks": 38912, 00:09:42.793 "uuid": "0c36500e-09a2-464e-b404-ee610413be69", 00:09:42.793 "assigned_rate_limits": { 00:09:42.793 "rw_ios_per_sec": 0, 00:09:42.793 "rw_mbytes_per_sec": 0, 00:09:42.793 "r_mbytes_per_sec": 0, 00:09:42.793 "w_mbytes_per_sec": 0 00:09:42.793 }, 00:09:42.793 "claimed": false, 00:09:42.793 "zoned": false, 00:09:42.793 "supported_io_types": { 00:09:42.793 "read": true, 00:09:42.793 "write": true, 00:09:42.793 "unmap": true, 00:09:42.793 "write_zeroes": true, 00:09:42.793 "flush": false, 00:09:42.793 "reset": true, 00:09:42.793 "compare": false, 00:09:42.793 "compare_and_write": false, 00:09:42.793 "abort": false, 00:09:42.793 "nvme_admin": false, 00:09:42.793 "nvme_io": false 00:09:42.793 }, 00:09:42.793 "driver_specific": { 00:09:42.793 "lvol": { 00:09:42.793 "lvol_store_uuid": "a8987129-1376-42d7-8730-f7dcd4c129cc", 00:09:42.793 "base_bdev": "aio_bdev", 00:09:42.793 "thin_provision": false, 00:09:42.793 "snapshot": false, 00:09:42.793 "clone": false, 00:09:42.793 "esnap_clone": false 00:09:42.793 } 00:09:42.793 } 00:09:42.793 } 00:09:42.793 ] 00:09:42.793 07:55:48 -- common/autotest_common.sh@895 -- # return 0 00:09:42.793 07:55:48 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8987129-1376-42d7-8730-f7dcd4c129cc 00:09:42.793 07:55:48 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:09:43.051 07:55:48 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:09:43.051 07:55:48 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u a8987129-1376-42d7-8730-f7dcd4c129cc 00:09:43.051 07:55:48 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:09:43.310 07:55:49 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:09:43.310 07:55:49 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0c36500e-09a2-464e-b404-ee610413be69 00:09:43.569 07:55:49 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a8987129-1376-42d7-8730-f7dcd4c129cc 00:09:43.828 07:55:49 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:43.828 07:55:49 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:09:44.395 ************************************ 00:09:44.395 END TEST lvs_grow_dirty 00:09:44.395 ************************************ 00:09:44.395 00:09:44.395 real 0m19.741s 00:09:44.395 user 0m40.352s 00:09:44.395 sys 0m8.993s 00:09:44.395 07:55:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.395 07:55:49 -- common/autotest_common.sh@10 -- # set +x 00:09:44.395 07:55:49 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:44.395 07:55:49 -- common/autotest_common.sh@796 -- # type=--id 00:09:44.395 07:55:49 -- common/autotest_common.sh@797 -- # id=0 00:09:44.395 
07:55:49 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:09:44.395 07:55:49 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:44.395 07:55:49 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:09:44.395 07:55:49 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:09:44.396 07:55:49 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:09:44.396 07:55:49 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:44.396 nvmf_trace.0 00:09:44.396 07:55:49 -- common/autotest_common.sh@811 -- # return 0 00:09:44.396 07:55:49 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:44.396 07:55:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:44.396 07:55:49 -- nvmf/common.sh@116 -- # sync 00:09:44.396 07:55:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:44.396 07:55:50 -- nvmf/common.sh@119 -- # set +e 00:09:44.396 07:55:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:44.396 07:55:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:44.655 rmmod nvme_tcp 00:09:44.655 rmmod nvme_fabrics 00:09:44.655 rmmod nvme_keyring 00:09:44.655 07:55:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:44.655 07:55:50 -- nvmf/common.sh@123 -- # set -e 00:09:44.655 07:55:50 -- nvmf/common.sh@124 -- # return 0 00:09:44.655 07:55:50 -- nvmf/common.sh@477 -- # '[' -n 71549 ']' 00:09:44.655 07:55:50 -- nvmf/common.sh@478 -- # killprocess 71549 00:09:44.655 07:55:50 -- common/autotest_common.sh@926 -- # '[' -z 71549 ']' 00:09:44.655 07:55:50 -- common/autotest_common.sh@930 -- # kill -0 71549 00:09:44.655 07:55:50 -- common/autotest_common.sh@931 -- # uname 00:09:44.655 07:55:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:44.655 07:55:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71549 00:09:44.655 killing process with pid 71549 00:09:44.655 07:55:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:44.655 07:55:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:44.655 07:55:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71549' 00:09:44.655 07:55:50 -- common/autotest_common.sh@945 -- # kill 71549 00:09:44.655 07:55:50 -- common/autotest_common.sh@950 -- # wait 71549 00:09:44.655 07:55:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:44.655 07:55:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:44.655 07:55:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:44.655 07:55:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:44.655 07:55:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:44.655 07:55:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.655 07:55:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.655 07:55:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.655 07:55:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:44.655 ************************************ 00:09:44.655 END TEST nvmf_lvs_grow 00:09:44.655 ************************************ 00:09:44.655 00:09:44.655 real 0m39.372s 00:09:44.655 user 1m2.587s 00:09:44.655 sys 0m11.945s 00:09:44.655 07:55:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:44.655 07:55:50 -- common/autotest_common.sh@10 -- # set +x 00:09:44.914 07:55:50 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:44.914 07:55:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:44.914 07:55:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:44.914 07:55:50 -- common/autotest_common.sh@10 -- # set +x 00:09:44.914 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:09:44.914 ************************************ 00:09:44.914 START TEST nvmf_bdev_io_wait 00:09:44.914 ************************************ 00:09:44.914 07:55:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:44.915 * Looking for test storage... 00:09:44.915 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:44.915 07:55:50 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:44.915 07:55:50 -- nvmf/common.sh@7 -- # uname -s 00:09:44.915 07:55:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:44.915 07:55:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:44.915 07:55:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:44.915 07:55:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:44.915 07:55:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:44.915 07:55:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:44.915 07:55:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:44.915 07:55:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:44.915 07:55:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:44.915 07:55:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:44.915 07:55:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:09:44.915 07:55:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:09:44.915 07:55:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:44.915 07:55:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:44.915 07:55:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:44.915 07:55:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:44.915 07:55:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:44.915 07:55:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:44.915 07:55:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:44.915 07:55:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.915 07:55:50 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.915 07:55:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.915 07:55:50 -- paths/export.sh@5 -- # export PATH 00:09:44.915 07:55:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:44.915 07:55:50 -- nvmf/common.sh@46 -- # : 0 00:09:44.915 07:55:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:44.915 07:55:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:44.915 07:55:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:44.915 07:55:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:44.915 07:55:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:44.915 07:55:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:44.915 07:55:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:44.915 07:55:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:44.915 07:55:50 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:44.915 07:55:50 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:44.915 07:55:50 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:44.915 07:55:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:44.915 07:55:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:44.915 07:55:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:44.915 07:55:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:44.915 07:55:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:44.915 07:55:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:44.915 07:55:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:44.915 07:55:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.915 07:55:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:44.915 07:55:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:44.915 07:55:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:44.915 07:55:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:44.915 07:55:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 
00:09:44.915 07:55:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:44.915 07:55:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:44.915 07:55:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:44.915 07:55:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:44.915 07:55:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:44.915 07:55:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:44.915 07:55:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:44.915 07:55:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:44.915 07:55:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:44.915 07:55:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:44.915 07:55:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:44.915 07:55:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:44.915 07:55:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:44.915 07:55:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:44.915 07:55:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:44.915 Cannot find device "nvmf_tgt_br" 00:09:44.915 07:55:50 -- nvmf/common.sh@154 -- # true 00:09:44.915 07:55:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:44.915 Cannot find device "nvmf_tgt_br2" 00:09:44.915 07:55:50 -- nvmf/common.sh@155 -- # true 00:09:44.915 07:55:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:44.915 07:55:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:44.915 Cannot find device "nvmf_tgt_br" 00:09:44.915 07:55:50 -- nvmf/common.sh@157 -- # true 00:09:44.915 07:55:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:44.915 Cannot find device "nvmf_tgt_br2" 00:09:44.915 07:55:50 -- nvmf/common.sh@158 -- # true 00:09:44.915 07:55:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:45.174 07:55:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:45.174 07:55:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:45.174 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.174 07:55:50 -- nvmf/common.sh@161 -- # true 00:09:45.174 07:55:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:45.174 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:45.174 07:55:50 -- nvmf/common.sh@162 -- # true 00:09:45.174 07:55:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:45.174 07:55:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:45.174 07:55:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:45.174 07:55:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:45.174 07:55:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:45.174 07:55:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:45.174 07:55:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:45.174 07:55:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:45.174 07:55:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:45.174 
07:55:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:45.174 07:55:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:45.174 07:55:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:45.174 07:55:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:45.174 07:55:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:45.174 07:55:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:45.174 07:55:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:45.174 07:55:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:45.174 07:55:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:45.174 07:55:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:45.174 07:55:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:45.174 07:55:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:45.174 07:55:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:45.174 07:55:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:45.174 07:55:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:45.174 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.174 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.052 ms 00:09:45.174 00:09:45.174 --- 10.0.0.2 ping statistics --- 00:09:45.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.174 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:09:45.174 07:55:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:45.174 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:45.174 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:09:45.174 00:09:45.174 --- 10.0.0.3 ping statistics --- 00:09:45.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.174 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:09:45.174 07:55:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:45.174 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:45.174 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:45.174 00:09:45.174 --- 10.0.0.1 ping statistics --- 00:09:45.174 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.174 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:45.174 07:55:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.174 07:55:50 -- nvmf/common.sh@421 -- # return 0 00:09:45.174 07:55:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:45.174 07:55:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.174 07:55:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:45.174 07:55:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:45.174 07:55:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.174 07:55:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:45.174 07:55:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:45.174 07:55:50 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:45.174 07:55:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:45.174 07:55:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:45.174 07:55:50 -- common/autotest_common.sh@10 -- # set +x 00:09:45.174 07:55:50 -- nvmf/common.sh@469 -- # nvmfpid=71823 00:09:45.174 07:55:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:45.174 07:55:50 -- nvmf/common.sh@470 -- # waitforlisten 71823 00:09:45.174 07:55:50 -- common/autotest_common.sh@819 -- # '[' -z 71823 ']' 00:09:45.174 07:55:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.174 07:55:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:45.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.174 07:55:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.174 07:55:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:45.174 07:55:50 -- common/autotest_common.sh@10 -- # set +x 00:09:45.433 [2024-07-13 07:55:50.999584] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:45.433 [2024-07-13 07:55:50.999675] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.433 [2024-07-13 07:55:51.136986] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:45.433 [2024-07-13 07:55:51.175127] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:45.433 [2024-07-13 07:55:51.175447] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.433 [2024-07-13 07:55:51.175501] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.433 [2024-07-13 07:55:51.175684] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:45.433 [2024-07-13 07:55:51.176015] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.433 [2024-07-13 07:55:51.176102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:45.433 [2024-07-13 07:55:51.179803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:45.433 [2024-07-13 07:55:51.179843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.433 07:55:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:45.433 07:55:51 -- common/autotest_common.sh@852 -- # return 0 00:09:45.433 07:55:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:45.433 07:55:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:45.433 07:55:51 -- common/autotest_common.sh@10 -- # set +x 00:09:45.692 07:55:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:45.692 07:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:45.692 07:55:51 -- common/autotest_common.sh@10 -- # set +x 00:09:45.692 07:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:45.692 07:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:45.692 07:55:51 -- common/autotest_common.sh@10 -- # set +x 00:09:45.692 07:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:45.692 07:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:45.692 07:55:51 -- common/autotest_common.sh@10 -- # set +x 00:09:45.692 [2024-07-13 07:55:51.322751] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.692 07:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:45.692 07:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:45.692 07:55:51 -- common/autotest_common.sh@10 -- # set +x 00:09:45.692 Malloc0 00:09:45.692 07:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:45.692 07:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:45.692 07:55:51 -- common/autotest_common.sh@10 -- # set +x 00:09:45.692 07:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:45.692 07:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:45.692 07:55:51 -- common/autotest_common.sh@10 -- # set +x 00:09:45.692 07:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:45.692 07:55:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:45.692 07:55:51 -- common/autotest_common.sh@10 -- # set +x 00:09:45.692 [2024-07-13 07:55:51.380754] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.692 07:55:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=71850 00:09:45.692 07:55:51 
-- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@30 -- # READ_PID=71852 00:09:45.692 07:55:51 -- nvmf/common.sh@520 -- # config=() 00:09:45.692 07:55:51 -- nvmf/common.sh@520 -- # local subsystem config 00:09:45.692 07:55:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:45.692 07:55:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:45.692 { 00:09:45.692 "params": { 00:09:45.692 "name": "Nvme$subsystem", 00:09:45.692 "trtype": "$TEST_TRANSPORT", 00:09:45.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.692 "adrfam": "ipv4", 00:09:45.692 "trsvcid": "$NVMF_PORT", 00:09:45.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.692 "hdgst": ${hdgst:-false}, 00:09:45.692 "ddgst": ${ddgst:-false} 00:09:45.692 }, 00:09:45.692 "method": "bdev_nvme_attach_controller" 00:09:45.692 } 00:09:45.692 EOF 00:09:45.692 )") 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:45.692 07:55:51 -- nvmf/common.sh@520 -- # config=() 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=71854 00:09:45.692 07:55:51 -- nvmf/common.sh@520 -- # local subsystem config 00:09:45.692 07:55:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:45.692 07:55:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:45.692 { 00:09:45.692 "params": { 00:09:45.692 "name": "Nvme$subsystem", 00:09:45.692 "trtype": "$TEST_TRANSPORT", 00:09:45.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.692 "adrfam": "ipv4", 00:09:45.692 "trsvcid": "$NVMF_PORT", 00:09:45.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.692 "hdgst": ${hdgst:-false}, 00:09:45.692 "ddgst": ${ddgst:-false} 00:09:45.692 }, 00:09:45.692 "method": "bdev_nvme_attach_controller" 00:09:45.692 } 00:09:45.692 EOF 00:09:45.692 )") 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=71857 00:09:45.692 07:55:51 -- nvmf/common.sh@542 -- # cat 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@35 -- # sync 00:09:45.692 07:55:51 -- nvmf/common.sh@542 -- # cat 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:45.692 07:55:51 -- nvmf/common.sh@520 -- # config=() 00:09:45.692 07:55:51 -- nvmf/common.sh@520 -- # local subsystem config 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:45.692 07:55:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:45.692 07:55:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:45.692 { 00:09:45.692 "params": { 00:09:45.692 "name": "Nvme$subsystem", 00:09:45.692 "trtype": "$TEST_TRANSPORT", 00:09:45.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.692 "adrfam": "ipv4", 00:09:45.692 "trsvcid": "$NVMF_PORT", 00:09:45.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:09:45.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.692 "hdgst": ${hdgst:-false}, 00:09:45.692 "ddgst": ${ddgst:-false} 00:09:45.692 }, 00:09:45.692 "method": "bdev_nvme_attach_controller" 00:09:45.692 } 00:09:45.692 EOF 00:09:45.692 )") 00:09:45.692 07:55:51 -- nvmf/common.sh@544 -- # jq . 00:09:45.692 07:55:51 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:45.692 07:55:51 -- nvmf/common.sh@520 -- # config=() 00:09:45.692 07:55:51 -- nvmf/common.sh@520 -- # local subsystem config 00:09:45.692 07:55:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:09:45.692 07:55:51 -- nvmf/common.sh@544 -- # jq . 00:09:45.692 07:55:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:09:45.692 { 00:09:45.692 "params": { 00:09:45.692 "name": "Nvme$subsystem", 00:09:45.692 "trtype": "$TEST_TRANSPORT", 00:09:45.692 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.692 "adrfam": "ipv4", 00:09:45.692 "trsvcid": "$NVMF_PORT", 00:09:45.692 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.692 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.692 "hdgst": ${hdgst:-false}, 00:09:45.692 "ddgst": ${ddgst:-false} 00:09:45.692 }, 00:09:45.692 "method": "bdev_nvme_attach_controller" 00:09:45.692 } 00:09:45.692 EOF 00:09:45.692 )") 00:09:45.692 07:55:51 -- nvmf/common.sh@545 -- # IFS=, 00:09:45.692 07:55:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:45.692 "params": { 00:09:45.692 "name": "Nvme1", 00:09:45.692 "trtype": "tcp", 00:09:45.692 "traddr": "10.0.0.2", 00:09:45.692 "adrfam": "ipv4", 00:09:45.692 "trsvcid": "4420", 00:09:45.692 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.692 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.692 "hdgst": false, 00:09:45.692 "ddgst": false 00:09:45.692 }, 00:09:45.692 "method": "bdev_nvme_attach_controller" 00:09:45.692 }' 00:09:45.692 07:55:51 -- nvmf/common.sh@542 -- # cat 00:09:45.692 07:55:51 -- nvmf/common.sh@545 -- # IFS=, 00:09:45.692 07:55:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:45.692 "params": { 00:09:45.692 "name": "Nvme1", 00:09:45.692 "trtype": "tcp", 00:09:45.692 "traddr": "10.0.0.2", 00:09:45.692 "adrfam": "ipv4", 00:09:45.692 "trsvcid": "4420", 00:09:45.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.693 "hdgst": false, 00:09:45.693 "ddgst": false 00:09:45.693 }, 00:09:45.693 "method": "bdev_nvme_attach_controller" 00:09:45.693 }' 00:09:45.693 07:55:51 -- nvmf/common.sh@542 -- # cat 00:09:45.693 07:55:51 -- nvmf/common.sh@544 -- # jq . 00:09:45.693 07:55:51 -- nvmf/common.sh@545 -- # IFS=, 00:09:45.693 07:55:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:45.693 "params": { 00:09:45.693 "name": "Nvme1", 00:09:45.693 "trtype": "tcp", 00:09:45.693 "traddr": "10.0.0.2", 00:09:45.693 "adrfam": "ipv4", 00:09:45.693 "trsvcid": "4420", 00:09:45.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.693 "hdgst": false, 00:09:45.693 "ddgst": false 00:09:45.693 }, 00:09:45.693 "method": "bdev_nvme_attach_controller" 00:09:45.693 }' 00:09:45.693 07:55:51 -- nvmf/common.sh@544 -- # jq . 
00:09:45.693 07:55:51 -- nvmf/common.sh@545 -- # IFS=, 00:09:45.693 07:55:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:09:45.693 "params": { 00:09:45.693 "name": "Nvme1", 00:09:45.693 "trtype": "tcp", 00:09:45.693 "traddr": "10.0.0.2", 00:09:45.693 "adrfam": "ipv4", 00:09:45.693 "trsvcid": "4420", 00:09:45.693 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.693 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.693 "hdgst": false, 00:09:45.693 "ddgst": false 00:09:45.693 }, 00:09:45.693 "method": "bdev_nvme_attach_controller" 00:09:45.693 }' 00:09:45.693 [2024-07-13 07:55:51.438120] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:45.693 [2024-07-13 07:55:51.438203] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:45.693 [2024-07-13 07:55:51.452715] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:45.693 [2024-07-13 07:55:51.452804] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:45.693 07:55:51 -- target/bdev_io_wait.sh@37 -- # wait 71850 00:09:45.693 [2024-07-13 07:55:51.475524] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:45.693 [2024-07-13 07:55:51.475634] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:45.693 [2024-07-13 07:55:51.478821] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:45.693 [2024-07-13 07:55:51.478896] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:45.951 [2024-07-13 07:55:51.612966] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.951 [2024-07-13 07:55:51.637343] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:09:45.951 [2024-07-13 07:55:51.652432] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.951 [2024-07-13 07:55:51.672503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:09:45.951 [2024-07-13 07:55:51.691764] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.951 [2024-07-13 07:55:51.717929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:09:45.951 [2024-07-13 07:55:51.738871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.951 Running I/O for 1 seconds... 00:09:45.951 [2024-07-13 07:55:51.763321] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:09:46.210 Running I/O for 1 seconds... 00:09:46.210 Running I/O for 1 seconds... 00:09:46.210 Running I/O for 1 seconds... 
00:09:47.169 00:09:47.169 Latency(us) 00:09:47.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.169 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:47.169 Nvme1n1 : 1.02 6561.25 25.63 0.00 0.00 19242.81 8281.37 33602.09 00:09:47.169 =================================================================================================================== 00:09:47.169 Total : 6561.25 25.63 0.00 0.00 19242.81 8281.37 33602.09 00:09:47.169 00:09:47.169 Latency(us) 00:09:47.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.169 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:47.169 Nvme1n1 : 1.01 9179.24 35.86 0.00 0.00 13870.75 9234.62 24069.59 00:09:47.169 =================================================================================================================== 00:09:47.169 Total : 9179.24 35.86 0.00 0.00 13870.75 9234.62 24069.59 00:09:47.169 00:09:47.169 Latency(us) 00:09:47.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.169 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:47.169 Nvme1n1 : 1.00 173189.32 676.52 0.00 0.00 736.13 327.68 1042.62 00:09:47.169 =================================================================================================================== 00:09:47.169 Total : 173189.32 676.52 0.00 0.00 736.13 327.68 1042.62 00:09:47.169 00:09:47.169 Latency(us) 00:09:47.169 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.169 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:47.169 Nvme1n1 : 1.00 6946.12 27.13 0.00 0.00 18376.52 4706.68 40989.79 00:09:47.169 =================================================================================================================== 00:09:47.169 Total : 6946.12 27.13 0.00 0.00 18376.52 4706.68 40989.79 00:09:47.169 07:55:52 -- target/bdev_io_wait.sh@38 -- # wait 71852 00:09:47.169 07:55:52 -- target/bdev_io_wait.sh@39 -- # wait 71854 00:09:47.169 07:55:52 -- target/bdev_io_wait.sh@40 -- # wait 71857 00:09:47.427 07:55:53 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:47.427 07:55:53 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:47.427 07:55:53 -- common/autotest_common.sh@10 -- # set +x 00:09:47.427 07:55:53 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:47.427 07:55:53 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:47.427 07:55:53 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:47.427 07:55:53 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:47.427 07:55:53 -- nvmf/common.sh@116 -- # sync 00:09:47.427 07:55:53 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:09:47.427 07:55:53 -- nvmf/common.sh@119 -- # set +e 00:09:47.427 07:55:53 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:47.427 07:55:53 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:09:47.427 rmmod nvme_tcp 00:09:47.427 rmmod nvme_fabrics 00:09:47.427 rmmod nvme_keyring 00:09:47.427 07:55:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:47.427 07:55:53 -- nvmf/common.sh@123 -- # set -e 00:09:47.427 07:55:53 -- nvmf/common.sh@124 -- # return 0 00:09:47.427 07:55:53 -- nvmf/common.sh@477 -- # '[' -n 71823 ']' 00:09:47.427 07:55:53 -- nvmf/common.sh@478 -- # killprocess 71823 00:09:47.427 07:55:53 -- common/autotest_common.sh@926 -- # '[' -z 71823 ']' 00:09:47.427 07:55:53 -- common/autotest_common.sh@930 -- 
# kill -0 71823 00:09:47.427 07:55:53 -- common/autotest_common.sh@931 -- # uname 00:09:47.427 07:55:53 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:09:47.427 07:55:53 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 71823 00:09:47.427 07:55:53 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:09:47.427 07:55:53 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:09:47.427 killing process with pid 71823 00:09:47.427 07:55:53 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 71823' 00:09:47.427 07:55:53 -- common/autotest_common.sh@945 -- # kill 71823 00:09:47.427 07:55:53 -- common/autotest_common.sh@950 -- # wait 71823 00:09:47.684 07:55:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:47.684 07:55:53 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:09:47.684 07:55:53 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:09:47.684 07:55:53 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.684 07:55:53 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:09:47.684 07:55:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.684 07:55:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.684 07:55:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.684 07:55:53 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:09:47.684 ************************************ 00:09:47.684 END TEST nvmf_bdev_io_wait 00:09:47.684 ************************************ 00:09:47.684 00:09:47.684 real 0m2.837s 00:09:47.684 user 0m12.648s 00:09:47.684 sys 0m1.838s 00:09:47.684 07:55:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.684 07:55:53 -- common/autotest_common.sh@10 -- # set +x 00:09:47.684 07:55:53 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:47.684 07:55:53 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:09:47.684 07:55:53 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:09:47.684 07:55:53 -- common/autotest_common.sh@10 -- # set +x 00:09:47.684 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:09:47.684 ************************************ 00:09:47.684 START TEST nvmf_queue_depth 00:09:47.684 ************************************ 00:09:47.684 07:55:53 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:47.684 * Looking for test storage... 
00:09:47.684 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:09:47.684 07:55:53 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:09:47.684 07:55:53 -- nvmf/common.sh@7 -- # uname -s 00:09:47.684 07:55:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.684 07:55:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.684 07:55:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.684 07:55:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.684 07:55:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.684 07:55:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.684 07:55:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.684 07:55:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.684 07:55:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.684 07:55:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.684 07:55:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:09:47.684 07:55:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:09:47.684 07:55:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.684 07:55:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.684 07:55:53 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:09:47.684 07:55:53 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:47.684 07:55:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.684 07:55:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.684 07:55:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.684 07:55:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.685 07:55:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.685 07:55:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.685 07:55:53 -- 
paths/export.sh@5 -- # export PATH 00:09:47.685 07:55:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.685 07:55:53 -- nvmf/common.sh@46 -- # : 0 00:09:47.685 07:55:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:47.685 07:55:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:47.685 07:55:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:47.685 07:55:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.685 07:55:53 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.685 07:55:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:47.685 07:55:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:47.685 07:55:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:47.685 07:55:53 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:47.685 07:55:53 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:47.685 07:55:53 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:47.685 07:55:53 -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:47.685 07:55:53 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:09:47.685 07:55:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.685 07:55:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:47.685 07:55:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:47.685 07:55:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:47.685 07:55:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.685 07:55:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:47.685 07:55:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.685 07:55:53 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:09:47.685 07:55:53 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:09:47.685 07:55:53 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:09:47.685 07:55:53 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:09:47.685 07:55:53 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:09:47.685 07:55:53 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:09:47.685 07:55:53 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.685 07:55:53 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.685 07:55:53 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:09:47.685 07:55:53 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:09:47.685 07:55:53 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:09:47.685 07:55:53 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:09:47.685 07:55:53 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:09:47.685 07:55:53 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.685 07:55:53 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:09:47.685 07:55:53 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:09:47.685 07:55:53 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:09:47.685 07:55:53 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:09:47.685 07:55:53 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:09:47.942 07:55:53 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:09:47.942 Cannot find device "nvmf_tgt_br" 00:09:47.942 07:55:53 -- nvmf/common.sh@154 -- # true 00:09:47.942 07:55:53 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:09:47.942 Cannot find device "nvmf_tgt_br2" 00:09:47.942 07:55:53 -- nvmf/common.sh@155 -- # true 00:09:47.942 07:55:53 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:09:47.942 07:55:53 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:09:47.942 Cannot find device "nvmf_tgt_br" 00:09:47.942 07:55:53 -- nvmf/common.sh@157 -- # true 00:09:47.942 07:55:53 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:09:47.942 Cannot find device "nvmf_tgt_br2" 00:09:47.942 07:55:53 -- nvmf/common.sh@158 -- # true 00:09:47.942 07:55:53 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:09:47.942 07:55:53 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:09:47.942 07:55:53 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:09:47.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.942 07:55:53 -- nvmf/common.sh@161 -- # true 00:09:47.942 07:55:53 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:09:47.942 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:09:47.942 07:55:53 -- nvmf/common.sh@162 -- # true 00:09:47.942 07:55:53 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:09:47.942 07:55:53 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:09:47.942 07:55:53 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:09:47.942 07:55:53 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:09:47.942 07:55:53 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:09:47.942 07:55:53 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:09:47.942 07:55:53 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:09:47.942 07:55:53 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:09:47.942 07:55:53 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:09:47.942 07:55:53 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:09:47.942 07:55:53 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:09:47.942 07:55:53 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:09:47.942 07:55:53 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:09:48.200 07:55:53 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:09:48.201 07:55:53 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:09:48.201 07:55:53 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:09:48.201 07:55:53 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:09:48.201 07:55:53 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:09:48.201 07:55:53 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:09:48.201 07:55:53 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:09:48.201 07:55:53 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:09:48.201 
07:55:53 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:09:48.201 07:55:53 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:09:48.201 07:55:53 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:09:48.201 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:48.201 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.047 ms 00:09:48.201 00:09:48.201 --- 10.0.0.2 ping statistics --- 00:09:48.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.201 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:09:48.201 07:55:53 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:09:48.201 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:09:48.201 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.042 ms 00:09:48.201 00:09:48.201 --- 10.0.0.3 ping statistics --- 00:09:48.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.201 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:09:48.201 07:55:53 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:09:48.201 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:48.201 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:09:48.201 00:09:48.201 --- 10.0.0.1 ping statistics --- 00:09:48.201 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:48.201 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:09:48.201 07:55:53 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:48.201 07:55:53 -- nvmf/common.sh@421 -- # return 0 00:09:48.201 07:55:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:48.201 07:55:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:48.201 07:55:53 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:09:48.201 07:55:53 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:09:48.201 07:55:53 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:48.201 07:55:53 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:09:48.201 07:55:53 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:09:48.201 07:55:53 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:48.201 07:55:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:48.201 07:55:53 -- common/autotest_common.sh@712 -- # xtrace_disable 00:09:48.201 07:55:53 -- common/autotest_common.sh@10 -- # set +x 00:09:48.201 07:55:53 -- nvmf/common.sh@469 -- # nvmfpid=72041 00:09:48.201 07:55:53 -- nvmf/common.sh@470 -- # waitforlisten 72041 00:09:48.201 07:55:53 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:48.201 07:55:53 -- common/autotest_common.sh@819 -- # '[' -z 72041 ']' 00:09:48.201 07:55:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.201 07:55:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:48.201 07:55:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.201 07:55:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:48.201 07:55:53 -- common/autotest_common.sh@10 -- # set +x 00:09:48.201 [2024-07-13 07:55:53.907645] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:09:48.201 [2024-07-13 07:55:53.907755] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:48.459 [2024-07-13 07:55:54.047514] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.459 [2024-07-13 07:55:54.078608] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:48.459 [2024-07-13 07:55:54.078770] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:48.459 [2024-07-13 07:55:54.078782] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:48.459 [2024-07-13 07:55:54.078805] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:48.459 [2024-07-13 07:55:54.078827] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:49.391 07:55:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:49.391 07:55:54 -- common/autotest_common.sh@852 -- # return 0 00:09:49.391 07:55:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:49.391 07:55:54 -- common/autotest_common.sh@718 -- # xtrace_disable 00:09:49.391 07:55:54 -- common/autotest_common.sh@10 -- # set +x 00:09:49.391 07:55:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:49.391 07:55:54 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:49.391 07:55:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:49.391 07:55:54 -- common/autotest_common.sh@10 -- # set +x 00:09:49.391 [2024-07-13 07:55:54.931667] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:49.391 07:55:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:49.391 07:55:54 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:49.391 07:55:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:49.391 07:55:54 -- common/autotest_common.sh@10 -- # set +x 00:09:49.391 Malloc0 00:09:49.391 07:55:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:49.391 07:55:54 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:49.391 07:55:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:49.392 07:55:54 -- common/autotest_common.sh@10 -- # set +x 00:09:49.392 07:55:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:49.392 07:55:54 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:49.392 07:55:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:49.392 07:55:54 -- common/autotest_common.sh@10 -- # set +x 00:09:49.392 07:55:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:49.392 07:55:54 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:49.392 07:55:54 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:49.392 07:55:54 -- common/autotest_common.sh@10 -- # set +x 00:09:49.392 [2024-07-13 07:55:54.983474] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:49.392 07:55:54 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:49.392 07:55:54 -- target/queue_depth.sh@30 -- # bdevperf_pid=72067 00:09:49.392 07:55:54 
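The rpc_cmd calls above are the entire target-side configuration for the queue-depth test: one TCP transport, one malloc bdev, one subsystem with that bdev as a namespace, and a listener on the first target address. A condensed sketch of the same sequence issued with scripts/rpc.py against the target's default /var/tmp/spdk.sock socket, with flag values copied from the trace:

# TCP transport with the options used by the test
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# 64 MiB malloc bdev with 512-byte blocks to back the namespace
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
# subsystem allowing any host (-a) with serial number SPDK00000000000001 (-s)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# export the subsystem over NVMe/TCP on 10.0.0.2:4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420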
-- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:49.392 07:55:54 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:49.392 07:55:54 -- target/queue_depth.sh@33 -- # waitforlisten 72067 /var/tmp/bdevperf.sock 00:09:49.392 07:55:54 -- common/autotest_common.sh@819 -- # '[' -z 72067 ']' 00:09:49.392 07:55:54 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:49.392 07:55:54 -- common/autotest_common.sh@824 -- # local max_retries=100 00:09:49.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:49.392 07:55:54 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:49.392 07:55:54 -- common/autotest_common.sh@828 -- # xtrace_disable 00:09:49.392 07:55:54 -- common/autotest_common.sh@10 -- # set +x 00:09:49.392 [2024-07-13 07:55:55.031012] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:09:49.392 [2024-07-13 07:55:55.031095] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72067 ] 00:09:49.392 [2024-07-13 07:55:55.163685] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:49.392 [2024-07-13 07:55:55.195820] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.326 07:55:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:09:50.326 07:55:56 -- common/autotest_common.sh@852 -- # return 0 00:09:50.326 07:55:56 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:50.326 07:55:56 -- common/autotest_common.sh@551 -- # xtrace_disable 00:09:50.326 07:55:56 -- common/autotest_common.sh@10 -- # set +x 00:09:50.326 NVMe0n1 00:09:50.326 07:55:56 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:09:50.326 07:55:56 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:50.584 Running I/O for 10 seconds... 
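On the initiator side the test starts bdevperf in wait mode, attaches an NVMe-oF controller to the listener created above, and then triggers the verify workload at queue depth 1024. A condensed sketch of that host-side flow, with the repository paths shortened (the trace uses the full /home/vagrant/spdk_repo/spdk paths):

# start bdevperf idle (-z) on its own RPC socket: qd 1024, 4 KiB I/O, verify workload, 10 s
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# attach a bdev named NVMe0 backed by the NVMe/TCP subsystem at 10.0.0.2:4420
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# kick off the configured job and wait for completion
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests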
00:10:00.578 00:10:00.578 Latency(us) 00:10:00.578 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.578 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:00.578 Verification LBA range: start 0x0 length 0x4000 00:10:00.578 NVMe0n1 : 10.07 13893.36 54.27 0.00 0.00 73397.33 14417.92 59816.49 00:10:00.578 =================================================================================================================== 00:10:00.578 Total : 13893.36 54.27 0.00 0.00 73397.33 14417.92 59816.49 00:10:00.578 0 00:10:00.578 07:56:06 -- target/queue_depth.sh@39 -- # killprocess 72067 00:10:00.579 07:56:06 -- common/autotest_common.sh@926 -- # '[' -z 72067 ']' 00:10:00.579 07:56:06 -- common/autotest_common.sh@930 -- # kill -0 72067 00:10:00.579 07:56:06 -- common/autotest_common.sh@931 -- # uname 00:10:00.579 07:56:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:00.579 07:56:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72067 00:10:00.579 07:56:06 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:00.579 07:56:06 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:00.579 07:56:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72067' 00:10:00.579 killing process with pid 72067 00:10:00.579 07:56:06 -- common/autotest_common.sh@945 -- # kill 72067 00:10:00.579 Received shutdown signal, test time was about 10.000000 seconds 00:10:00.579 00:10:00.579 Latency(us) 00:10:00.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:00.579 =================================================================================================================== 00:10:00.579 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:00.579 07:56:06 -- common/autotest_common.sh@950 -- # wait 72067 00:10:00.837 07:56:06 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:00.837 07:56:06 -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:00.837 07:56:06 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:00.837 07:56:06 -- nvmf/common.sh@116 -- # sync 00:10:00.837 07:56:06 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:00.837 07:56:06 -- nvmf/common.sh@119 -- # set +e 00:10:00.837 07:56:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:00.837 07:56:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:00.837 rmmod nvme_tcp 00:10:00.837 rmmod nvme_fabrics 00:10:00.837 rmmod nvme_keyring 00:10:00.837 07:56:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:00.837 07:56:06 -- nvmf/common.sh@123 -- # set -e 00:10:00.837 07:56:06 -- nvmf/common.sh@124 -- # return 0 00:10:00.837 07:56:06 -- nvmf/common.sh@477 -- # '[' -n 72041 ']' 00:10:00.837 07:56:06 -- nvmf/common.sh@478 -- # killprocess 72041 00:10:00.837 07:56:06 -- common/autotest_common.sh@926 -- # '[' -z 72041 ']' 00:10:00.837 07:56:06 -- common/autotest_common.sh@930 -- # kill -0 72041 00:10:00.837 07:56:06 -- common/autotest_common.sh@931 -- # uname 00:10:00.837 07:56:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:00.837 07:56:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72041 00:10:00.837 07:56:06 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:10:00.837 killing process with pid 72041 00:10:00.837 07:56:06 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:10:00.837 07:56:06 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72041' 00:10:00.837 07:56:06 -- 
common/autotest_common.sh@945 -- # kill 72041 00:10:00.837 07:56:06 -- common/autotest_common.sh@950 -- # wait 72041 00:10:01.096 07:56:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:01.096 07:56:06 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:01.096 07:56:06 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:01.096 07:56:06 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:01.096 07:56:06 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:01.096 07:56:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.096 07:56:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:01.096 07:56:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.096 07:56:06 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:01.096 00:10:01.096 real 0m13.389s 00:10:01.096 user 0m23.468s 00:10:01.096 sys 0m1.868s 00:10:01.096 07:56:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:01.096 07:56:06 -- common/autotest_common.sh@10 -- # set +x 00:10:01.096 ************************************ 00:10:01.096 END TEST nvmf_queue_depth 00:10:01.096 ************************************ 00:10:01.096 07:56:06 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:01.096 07:56:06 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:01.096 07:56:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:01.096 07:56:06 -- common/autotest_common.sh@10 -- # set +x 00:10:01.096 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:10:01.096 ************************************ 00:10:01.096 START TEST nvmf_multipath 00:10:01.096 ************************************ 00:10:01.096 07:56:06 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:01.096 * Looking for test storage... 
00:10:01.355 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:01.355 07:56:06 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:01.355 07:56:06 -- nvmf/common.sh@7 -- # uname -s 00:10:01.355 07:56:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:01.355 07:56:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:01.355 07:56:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:01.355 07:56:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:01.355 07:56:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:01.355 07:56:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:01.355 07:56:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:01.355 07:56:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:01.355 07:56:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:01.355 07:56:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:01.355 07:56:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:10:01.355 07:56:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:10:01.355 07:56:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:01.355 07:56:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:01.355 07:56:06 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:01.355 07:56:06 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:01.355 07:56:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:01.355 07:56:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:01.355 07:56:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:01.355 07:56:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.355 07:56:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.355 07:56:06 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.355 07:56:06 -- 
paths/export.sh@5 -- # export PATH 00:10:01.355 07:56:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:01.355 07:56:06 -- nvmf/common.sh@46 -- # : 0 00:10:01.355 07:56:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:01.356 07:56:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:01.356 07:56:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:01.356 07:56:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:01.356 07:56:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:01.356 07:56:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:01.356 07:56:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:01.356 07:56:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:01.356 07:56:06 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:01.356 07:56:06 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:01.356 07:56:06 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:01.356 07:56:06 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:01.356 07:56:06 -- target/multipath.sh@43 -- # nvmftestinit 00:10:01.356 07:56:06 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:01.356 07:56:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:01.356 07:56:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:01.356 07:56:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:01.356 07:56:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:01.356 07:56:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:01.356 07:56:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:01.356 07:56:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:01.356 07:56:06 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:01.356 07:56:06 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:01.356 07:56:06 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:01.356 07:56:06 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:01.356 07:56:06 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:01.356 07:56:06 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:01.356 07:56:06 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:01.356 07:56:06 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:01.356 07:56:06 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:01.356 07:56:06 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:01.356 07:56:06 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:01.356 07:56:06 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:01.356 07:56:06 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:01.356 07:56:06 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:01.356 07:56:06 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:01.356 07:56:06 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:01.356 07:56:06 -- nvmf/common.sh@150 -- # 
NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:01.356 07:56:06 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:01.356 07:56:06 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:01.356 07:56:06 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:01.356 Cannot find device "nvmf_tgt_br" 00:10:01.356 07:56:06 -- nvmf/common.sh@154 -- # true 00:10:01.356 07:56:06 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:01.356 Cannot find device "nvmf_tgt_br2" 00:10:01.356 07:56:06 -- nvmf/common.sh@155 -- # true 00:10:01.356 07:56:06 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:01.356 07:56:06 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:01.356 Cannot find device "nvmf_tgt_br" 00:10:01.356 07:56:07 -- nvmf/common.sh@157 -- # true 00:10:01.356 07:56:07 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:01.356 Cannot find device "nvmf_tgt_br2" 00:10:01.356 07:56:07 -- nvmf/common.sh@158 -- # true 00:10:01.356 07:56:07 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:01.356 07:56:07 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:01.356 07:56:07 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:01.356 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:01.356 07:56:07 -- nvmf/common.sh@161 -- # true 00:10:01.356 07:56:07 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:01.356 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:01.356 07:56:07 -- nvmf/common.sh@162 -- # true 00:10:01.356 07:56:07 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:01.356 07:56:07 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:01.356 07:56:07 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:01.356 07:56:07 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:01.356 07:56:07 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:01.356 07:56:07 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:01.356 07:56:07 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:01.356 07:56:07 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:01.356 07:56:07 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:01.356 07:56:07 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:01.615 07:56:07 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:01.615 07:56:07 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:01.615 07:56:07 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:01.615 07:56:07 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:01.615 07:56:07 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:01.615 07:56:07 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:01.615 07:56:07 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:01.615 07:56:07 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:01.615 07:56:07 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:01.615 07:56:07 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:01.615 07:56:07 
-- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:01.615 07:56:07 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:01.615 07:56:07 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:01.615 07:56:07 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:01.615 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:01.615 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:10:01.615 00:10:01.615 --- 10.0.0.2 ping statistics --- 00:10:01.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.615 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:01.615 07:56:07 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:01.615 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:01.615 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:10:01.615 00:10:01.615 --- 10.0.0.3 ping statistics --- 00:10:01.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.615 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:10:01.615 07:56:07 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:01.615 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:01.615 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:10:01.615 00:10:01.615 --- 10.0.0.1 ping statistics --- 00:10:01.615 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:01.615 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:10:01.615 07:56:07 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:01.615 07:56:07 -- nvmf/common.sh@421 -- # return 0 00:10:01.615 07:56:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:01.615 07:56:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:01.615 07:56:07 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:01.615 07:56:07 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:01.615 07:56:07 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:01.615 07:56:07 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:01.615 07:56:07 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:01.615 07:56:07 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:10:01.615 07:56:07 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:10:01.615 07:56:07 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:10:01.615 07:56:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:01.615 07:56:07 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:01.615 07:56:07 -- common/autotest_common.sh@10 -- # set +x 00:10:01.615 07:56:07 -- nvmf/common.sh@469 -- # nvmfpid=72314 00:10:01.615 07:56:07 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:01.615 07:56:07 -- nvmf/common.sh@470 -- # waitforlisten 72314 00:10:01.615 07:56:07 -- common/autotest_common.sh@819 -- # '[' -z 72314 ']' 00:10:01.615 07:56:07 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:01.615 07:56:07 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:01.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:01.615 07:56:07 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
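For the multipath test a new target is started inside the same namespace, this time with a four-core mask (-m 0xF). A minimal sketch of that start-up, with the wait for the RPC socket written as a simple poll loop (a hypothetical simplification of the waitforlisten helper the test actually uses):

ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# do not issue rpc.py calls until the target's UNIX-domain RPC socket exists
until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done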
00:10:01.615 07:56:07 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:01.615 07:56:07 -- common/autotest_common.sh@10 -- # set +x 00:10:01.615 [2024-07-13 07:56:07.355590] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:01.615 [2024-07-13 07:56:07.355673] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:01.873 [2024-07-13 07:56:07.497249] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:01.873 [2024-07-13 07:56:07.541189] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:01.873 [2024-07-13 07:56:07.541351] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:01.873 [2024-07-13 07:56:07.541368] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:01.873 [2024-07-13 07:56:07.541378] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:01.873 [2024-07-13 07:56:07.541529] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:01.873 [2024-07-13 07:56:07.542299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:01.873 [2024-07-13 07:56:07.542414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:01.873 [2024-07-13 07:56:07.542403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:02.808 07:56:08 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:02.809 07:56:08 -- common/autotest_common.sh@852 -- # return 0 00:10:02.809 07:56:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:02.809 07:56:08 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:02.809 07:56:08 -- common/autotest_common.sh@10 -- # set +x 00:10:02.809 07:56:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:02.809 07:56:08 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:02.809 [2024-07-13 07:56:08.596174] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.068 07:56:08 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:10:03.068 Malloc0 00:10:03.068 07:56:08 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:10:03.326 07:56:09 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.584 07:56:09 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.842 [2024-07-13 07:56:09.568625] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.842 07:56:09 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:10:04.100 [2024-07-13 07:56:09.784824] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:04.100 07:56:09 -- target/multipath.sh@67 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:10:04.358 07:56:09 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:10:04.358 07:56:10 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:10:04.358 07:56:10 -- common/autotest_common.sh@1177 -- # local i=0 00:10:04.358 07:56:10 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:04.358 07:56:10 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:10:04.358 07:56:10 -- common/autotest_common.sh@1184 -- # sleep 2 00:10:06.889 07:56:12 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:10:06.889 07:56:12 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:10:06.889 07:56:12 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:10:06.889 07:56:12 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:10:06.889 07:56:12 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:10:06.889 07:56:12 -- common/autotest_common.sh@1187 -- # return 0 00:10:06.889 07:56:12 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:10:06.889 07:56:12 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:10:06.889 07:56:12 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:10:06.889 07:56:12 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:06.889 07:56:12 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:10:06.889 07:56:12 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:10:06.889 07:56:12 -- target/multipath.sh@38 -- # return 0 00:10:06.889 07:56:12 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:10:06.889 07:56:12 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:10:06.889 07:56:12 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:10:06.889 07:56:12 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:10:06.889 07:56:12 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:10:06.889 07:56:12 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:10:06.889 07:56:12 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:10:06.889 07:56:12 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:06.889 07:56:12 -- target/multipath.sh@22 -- # local timeout=20 00:10:06.889 07:56:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:06.889 07:56:12 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:06.889 07:56:12 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:06.889 07:56:12 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:10:06.889 07:56:12 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:06.889 07:56:12 -- target/multipath.sh@22 -- # local timeout=20 00:10:06.889 07:56:12 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:06.889 07:56:12 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:06.889 07:56:12 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:06.889 07:56:12 -- target/multipath.sh@85 -- # echo numa 00:10:06.889 07:56:12 -- target/multipath.sh@88 -- # fio_pid=72375 00:10:06.889 07:56:12 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:06.889 07:56:12 -- target/multipath.sh@90 -- # sleep 1 00:10:06.889 [global] 00:10:06.889 thread=1 00:10:06.889 invalidate=1 00:10:06.889 rw=randrw 00:10:06.889 time_based=1 00:10:06.889 runtime=6 00:10:06.889 ioengine=libaio 00:10:06.889 direct=1 00:10:06.889 bs=4096 00:10:06.889 iodepth=128 00:10:06.889 norandommap=0 00:10:06.889 numjobs=1 00:10:06.889 00:10:06.889 verify_dump=1 00:10:06.889 verify_backlog=512 00:10:06.889 verify_state_save=0 00:10:06.889 do_verify=1 00:10:06.889 verify=crc32c-intel 00:10:06.889 [job0] 00:10:06.889 filename=/dev/nvme0n1 00:10:06.889 Could not set queue depth (nvme0n1) 00:10:06.889 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:06.889 fio-3.35 00:10:06.889 Starting 1 thread 00:10:07.454 07:56:13 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:07.712 07:56:13 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:07.970 07:56:13 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:10:07.970 07:56:13 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:07.970 07:56:13 -- target/multipath.sh@22 -- # local timeout=20 00:10:07.970 07:56:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:07.970 07:56:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:07.970 07:56:13 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:07.970 07:56:13 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:10:07.970 07:56:13 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:07.970 07:56:13 -- target/multipath.sh@22 -- # local timeout=20 00:10:07.970 07:56:13 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:07.970 07:56:13 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:07.970 07:56:13 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:07.970 07:56:13 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:08.228 07:56:13 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:08.486 07:56:14 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:10:08.486 07:56:14 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:08.486 07:56:14 -- target/multipath.sh@22 -- # local timeout=20 00:10:08.486 07:56:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:08.486 07:56:14 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:08.486 07:56:14 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:08.486 07:56:14 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:10:08.486 07:56:14 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:08.486 07:56:14 -- target/multipath.sh@22 -- # local timeout=20 00:10:08.486 07:56:14 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:08.486 07:56:14 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:08.486 07:56:14 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:08.486 07:56:14 -- target/multipath.sh@104 -- # wait 72375 00:10:12.668 00:10:12.668 job0: (groupid=0, jobs=1): err= 0: pid=72396: Sat Jul 13 07:56:18 2024 00:10:12.668 read: IOPS=10.9k, BW=42.5MiB/s (44.6MB/s)(256MiB/6007msec) 00:10:12.668 slat (usec): min=4, max=5761, avg=53.37, stdev=222.05 00:10:12.668 clat (usec): min=1360, max=14380, avg=7896.01, stdev=1376.18 00:10:12.668 lat (usec): min=1388, max=14390, avg=7949.39, stdev=1381.07 00:10:12.668 clat percentiles (usec): 00:10:12.668 | 1.00th=[ 4113], 5.00th=[ 5997], 10.00th=[ 6718], 20.00th=[ 7177], 00:10:12.668 | 30.00th=[ 7373], 40.00th=[ 7570], 50.00th=[ 7767], 60.00th=[ 8029], 00:10:12.668 | 70.00th=[ 8225], 80.00th=[ 8586], 90.00th=[ 9110], 95.00th=[11076], 00:10:12.668 | 99.00th=[12256], 99.50th=[12518], 99.90th=[13042], 99.95th=[13304], 00:10:12.668 | 99.99th=[14222] 00:10:12.668 bw ( KiB/s): min= 9952, max=29760, per=53.41%, avg=23264.64, stdev=6388.92, samples=11 00:10:12.668 iops : min= 2488, max= 7440, avg=5816.09, stdev=1597.21, samples=11 00:10:12.668 write: IOPS=6483, BW=25.3MiB/s (26.6MB/s)(137MiB/5426msec); 0 zone resets 00:10:12.668 slat (usec): min=15, max=1918, avg=61.88, stdev=149.51 00:10:12.668 clat (usec): min=1083, max=13872, avg=6979.63, stdev=1234.50 00:10:12.668 lat (usec): min=1108, max=14200, avg=7041.51, stdev=1239.21 00:10:12.668 clat percentiles (usec): 00:10:12.668 | 1.00th=[ 3228], 5.00th=[ 4080], 10.00th=[ 5538], 20.00th=[ 6521], 00:10:12.668 | 30.00th=[ 6783], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7308], 00:10:12.668 | 70.00th=[ 7504], 80.00th=[ 7701], 90.00th=[ 7963], 95.00th=[ 8225], 00:10:12.668 | 99.00th=[10814], 99.50th=[11469], 99.90th=[12649], 99.95th=[13042], 00:10:12.668 | 99.99th=[13698] 00:10:12.668 bw ( KiB/s): min=10312, max=29112, per=89.76%, avg=23276.00, stdev=6097.57, samples=11 00:10:12.668 iops : min= 2578, max= 7278, avg=5819.00, stdev=1524.39, samples=11 00:10:12.668 lat (msec) : 2=0.04%, 4=2.12%, 10=92.66%, 20=5.18% 00:10:12.668 cpu : usr=5.66%, sys=22.26%, ctx=5819, majf=0, minf=88 00:10:12.668 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:12.668 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:12.668 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:12.668 issued rwts: total=65418,35177,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:12.668 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:12.668 00:10:12.668 Run status group 0 (all jobs): 00:10:12.668 READ: bw=42.5MiB/s (44.6MB/s), 42.5MiB/s-42.5MiB/s (44.6MB/s-44.6MB/s), io=256MiB (268MB), run=6007-6007msec 00:10:12.668 WRITE: bw=25.3MiB/s (26.6MB/s), 25.3MiB/s-25.3MiB/s (26.6MB/s-26.6MB/s), io=137MiB (144MB), run=5426-5426msec 00:10:12.668 00:10:12.668 Disk stats (read/write): 00:10:12.668 nvme0n1: ios=64474/34515, merge=0/0, 
ticks=486512/225503, in_queue=712015, util=98.62% 00:10:12.668 07:56:18 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:10:12.925 07:56:18 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:10:13.183 07:56:18 -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:10:13.183 07:56:18 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:10:13.183 07:56:18 -- target/multipath.sh@22 -- # local timeout=20 00:10:13.183 07:56:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:13.183 07:56:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:13.183 07:56:18 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:13.183 07:56:18 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:10:13.183 07:56:18 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:10:13.183 07:56:18 -- target/multipath.sh@22 -- # local timeout=20 00:10:13.183 07:56:18 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:13.183 07:56:18 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:13.183 07:56:18 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:10:13.183 07:56:18 -- target/multipath.sh@113 -- # echo round-robin 00:10:13.183 07:56:18 -- target/multipath.sh@116 -- # fio_pid=72436 00:10:13.183 07:56:18 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:10:13.183 07:56:18 -- target/multipath.sh@118 -- # sleep 1 00:10:13.183 [global] 00:10:13.183 thread=1 00:10:13.183 invalidate=1 00:10:13.183 rw=randrw 00:10:13.183 time_based=1 00:10:13.183 runtime=6 00:10:13.183 ioengine=libaio 00:10:13.183 direct=1 00:10:13.183 bs=4096 00:10:13.183 iodepth=128 00:10:13.183 norandommap=0 00:10:13.183 numjobs=1 00:10:13.183 00:10:13.183 verify_dump=1 00:10:13.183 verify_backlog=512 00:10:13.183 verify_state_save=0 00:10:13.183 do_verify=1 00:10:13.183 verify=crc32c-intel 00:10:13.183 [job0] 00:10:13.183 filename=/dev/nvme0n1 00:10:13.440 Could not set queue depth (nvme0n1) 00:10:13.440 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:13.440 fio-3.35 00:10:13.440 Starting 1 thread 00:10:14.372 07:56:19 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:10:14.631 07:56:20 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:10:14.631 07:56:20 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:10:14.631 07:56:20 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:10:14.631 07:56:20 -- target/multipath.sh@22 -- # local timeout=20 00:10:14.631 07:56:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:14.631 07:56:20 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:10:14.631 07:56:20 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:14.631 07:56:20 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:10:14.631 07:56:20 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:10:14.631 07:56:20 -- target/multipath.sh@22 -- # local timeout=20 00:10:14.631 07:56:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:14.631 07:56:20 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:10:14.631 07:56:20 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:14.631 07:56:20 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:10:14.888 07:56:20 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:10:15.147 07:56:20 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:10:15.147 07:56:20 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:10:15.147 07:56:20 -- target/multipath.sh@22 -- # local timeout=20 00:10:15.147 07:56:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:10:15.147 07:56:20 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:10:15.147 07:56:20 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:10:15.147 07:56:20 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:10:15.147 07:56:20 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:10:15.147 07:56:20 -- target/multipath.sh@22 -- # local timeout=20 00:10:15.147 07:56:20 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:10:15.147 07:56:20 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:10:15.147 07:56:20 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:10:15.147 07:56:20 -- target/multipath.sh@132 -- # wait 72436 00:10:20.439 00:10:20.439 job0: (groupid=0, jobs=1): err= 0: pid=72457: Sat Jul 13 07:56:25 2024 00:10:20.439 read: IOPS=11.9k, BW=46.6MiB/s (48.9MB/s)(280MiB/6002msec) 00:10:20.439 slat (usec): min=4, max=5518, avg=41.37, stdev=187.01 00:10:20.439 clat (usec): min=416, max=13967, avg=7261.50, stdev=1748.86 00:10:20.439 lat (usec): min=456, max=13981, avg=7302.88, stdev=1762.99 00:10:20.439 clat percentiles (usec): 00:10:20.439 | 1.00th=[ 2999], 5.00th=[ 4015], 10.00th=[ 4817], 20.00th=[ 5800], 00:10:20.439 | 30.00th=[ 6783], 40.00th=[ 7242], 50.00th=[ 7504], 60.00th=[ 7767], 00:10:20.439 | 70.00th=[ 8094], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[ 9896], 00:10:20.439 | 99.00th=[11994], 99.50th=[12256], 99.90th=[12649], 99.95th=[12911], 00:10:20.439 | 99.99th=[13042] 00:10:20.439 bw ( KiB/s): min=12576, max=36896, per=55.11%, avg=26306.91, stdev=6897.65, samples=11 00:10:20.439 iops : min= 3144, max= 9224, avg=6576.73, stdev=1724.41, samples=11 00:10:20.439 write: IOPS=7075, BW=27.6MiB/s (29.0MB/s)(149MiB/5386msec); 0 zone resets 00:10:20.439 slat (usec): min=14, max=1829, avg=54.66, stdev=133.37 00:10:20.439 clat (usec): min=616, max=12965, avg=6344.32, stdev=1644.80 00:10:20.439 lat (usec): min=657, max=12989, avg=6398.99, stdev=1658.40 00:10:20.439 clat percentiles (usec): 00:10:20.439 | 1.00th=[ 2606], 5.00th=[ 3294], 10.00th=[ 3752], 20.00th=[ 4490], 00:10:20.439 | 30.00th=[ 5669], 40.00th=[ 6587], 50.00th=[ 6915], 60.00th=[ 7177], 00:10:20.439 | 70.00th=[ 7373], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8160], 00:10:20.439 | 99.00th=[ 9765], 99.50th=[10814], 99.90th=[11600], 99.95th=[12125], 00:10:20.439 | 99.99th=[12911] 00:10:20.439 bw ( KiB/s): min=12792, max=36864, per=92.75%, avg=26250.18, stdev=6771.79, samples=11 00:10:20.439 iops : min= 3198, max= 9216, avg=6562.55, stdev=1692.95, samples=11 00:10:20.439 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:10:20.439 lat (msec) : 2=0.17%, 4=7.49%, 10=88.83%, 20=3.49% 00:10:20.439 cpu : usr=5.83%, sys=23.81%, ctx=6056, majf=0, minf=157 00:10:20.439 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:10:20.439 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:20.439 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:20.439 issued rwts: total=71628,38110,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:20.439 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:20.439 00:10:20.439 Run status group 0 (all jobs): 00:10:20.439 READ: bw=46.6MiB/s (48.9MB/s), 46.6MiB/s-46.6MiB/s (48.9MB/s-48.9MB/s), io=280MiB (293MB), run=6002-6002msec 00:10:20.439 WRITE: bw=27.6MiB/s (29.0MB/s), 27.6MiB/s-27.6MiB/s (29.0MB/s-29.0MB/s), io=149MiB (156MB), run=5386-5386msec 00:10:20.439 00:10:20.439 Disk stats (read/write): 00:10:20.439 nvme0n1: ios=70114/38110, merge=0/0, ticks=484424/225369, in_queue=709793, util=98.63% 00:10:20.439 07:56:25 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:20.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:20.439 07:56:25 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:20.439 07:56:25 -- common/autotest_common.sh@1198 -- # local i=0 00:10:20.439 07:56:25 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.439 
07:56:25 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:10:20.439 07:56:25 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:20.439 07:56:25 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:20.439 07:56:25 -- common/autotest_common.sh@1210 -- # return 0 00:10:20.439 07:56:25 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:20.439 07:56:25 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:10:20.439 07:56:25 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:10:20.439 07:56:25 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:10:20.439 07:56:25 -- target/multipath.sh@144 -- # nvmftestfini 00:10:20.439 07:56:25 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:20.439 07:56:25 -- nvmf/common.sh@116 -- # sync 00:10:20.439 07:56:25 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:20.439 07:56:25 -- nvmf/common.sh@119 -- # set +e 00:10:20.439 07:56:25 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:20.439 07:56:25 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:20.439 rmmod nvme_tcp 00:10:20.439 rmmod nvme_fabrics 00:10:20.439 rmmod nvme_keyring 00:10:20.439 07:56:25 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:20.439 07:56:25 -- nvmf/common.sh@123 -- # set -e 00:10:20.439 07:56:25 -- nvmf/common.sh@124 -- # return 0 00:10:20.439 07:56:25 -- nvmf/common.sh@477 -- # '[' -n 72314 ']' 00:10:20.439 07:56:25 -- nvmf/common.sh@478 -- # killprocess 72314 00:10:20.439 07:56:25 -- common/autotest_common.sh@926 -- # '[' -z 72314 ']' 00:10:20.439 07:56:25 -- common/autotest_common.sh@930 -- # kill -0 72314 00:10:20.439 07:56:25 -- common/autotest_common.sh@931 -- # uname 00:10:20.439 07:56:25 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:20.439 07:56:25 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72314 00:10:20.439 killing process with pid 72314 00:10:20.439 07:56:25 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:20.439 07:56:25 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:20.439 07:56:25 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72314' 00:10:20.439 07:56:25 -- common/autotest_common.sh@945 -- # kill 72314 00:10:20.439 07:56:25 -- common/autotest_common.sh@950 -- # wait 72314 00:10:20.439 07:56:25 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:20.439 07:56:25 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:20.439 07:56:25 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:20.439 07:56:25 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:20.439 07:56:25 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:20.439 07:56:25 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.439 07:56:25 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.439 07:56:25 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.439 07:56:25 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:20.439 00:10:20.439 real 0m19.057s 00:10:20.439 user 1m10.989s 00:10:20.439 sys 0m10.139s 00:10:20.439 07:56:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:20.439 07:56:25 -- common/autotest_common.sh@10 -- # set +x 00:10:20.439 ************************************ 00:10:20.439 END TEST nvmf_multipath 00:10:20.439 ************************************ 00:10:20.440 07:56:25 -- nvmf/nvmf.sh@52 -- # run_test 
nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:20.440 07:56:25 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:20.440 07:56:25 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:20.440 07:56:25 -- common/autotest_common.sh@10 -- # set +x 00:10:20.440 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:10:20.440 ************************************ 00:10:20.440 START TEST nvmf_zcopy 00:10:20.440 ************************************ 00:10:20.440 07:56:25 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:20.440 * Looking for test storage... 00:10:20.440 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:20.440 07:56:26 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:20.440 07:56:26 -- nvmf/common.sh@7 -- # uname -s 00:10:20.440 07:56:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.440 07:56:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.440 07:56:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.440 07:56:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.440 07:56:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.440 07:56:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.440 07:56:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.440 07:56:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.440 07:56:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.440 07:56:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.440 07:56:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:10:20.440 07:56:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:10:20.440 07:56:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.440 07:56:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.440 07:56:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:20.440 07:56:26 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:20.440 07:56:26 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.440 07:56:26 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.440 07:56:26 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.440 07:56:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.440 07:56:26 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.440 07:56:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.440 07:56:26 -- paths/export.sh@5 -- # export PATH 00:10:20.440 07:56:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.440 07:56:26 -- nvmf/common.sh@46 -- # : 0 00:10:20.440 07:56:26 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:20.440 07:56:26 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:20.440 07:56:26 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:20.440 07:56:26 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.440 07:56:26 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.440 07:56:26 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:20.440 07:56:26 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:20.440 07:56:26 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:20.440 07:56:26 -- target/zcopy.sh@12 -- # nvmftestinit 00:10:20.440 07:56:26 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:20.440 07:56:26 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.440 07:56:26 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:20.440 07:56:26 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:20.440 07:56:26 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:20.440 07:56:26 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.440 07:56:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:20.440 07:56:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.440 07:56:26 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:20.440 07:56:26 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:20.440 07:56:26 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:20.440 07:56:26 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:20.440 07:56:26 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:20.440 07:56:26 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:20.440 07:56:26 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.440 07:56:26 -- 
nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:20.440 07:56:26 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:20.440 07:56:26 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:20.440 07:56:26 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:20.440 07:56:26 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:20.440 07:56:26 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:20.440 07:56:26 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.440 07:56:26 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:20.440 07:56:26 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:20.440 07:56:26 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:20.440 07:56:26 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:20.440 07:56:26 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:20.440 07:56:26 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:20.440 Cannot find device "nvmf_tgt_br" 00:10:20.440 07:56:26 -- nvmf/common.sh@154 -- # true 00:10:20.440 07:56:26 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:20.440 Cannot find device "nvmf_tgt_br2" 00:10:20.440 07:56:26 -- nvmf/common.sh@155 -- # true 00:10:20.440 07:56:26 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:20.440 07:56:26 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:20.440 Cannot find device "nvmf_tgt_br" 00:10:20.440 07:56:26 -- nvmf/common.sh@157 -- # true 00:10:20.440 07:56:26 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:20.440 Cannot find device "nvmf_tgt_br2" 00:10:20.440 07:56:26 -- nvmf/common.sh@158 -- # true 00:10:20.440 07:56:26 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:20.440 07:56:26 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:20.440 07:56:26 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:20.440 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.440 07:56:26 -- nvmf/common.sh@161 -- # true 00:10:20.440 07:56:26 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:20.440 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.440 07:56:26 -- nvmf/common.sh@162 -- # true 00:10:20.440 07:56:26 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:20.440 07:56:26 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:20.440 07:56:26 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:20.440 07:56:26 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:20.440 07:56:26 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:20.699 07:56:26 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:20.699 07:56:26 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:20.699 07:56:26 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:20.699 07:56:26 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:20.699 07:56:26 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:20.699 07:56:26 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:20.699 07:56:26 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:20.699 07:56:26 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:20.699 07:56:26 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:20.699 07:56:26 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:20.699 07:56:26 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:20.699 07:56:26 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:20.699 07:56:26 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:20.699 07:56:26 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:20.699 07:56:26 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:20.699 07:56:26 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:20.699 07:56:26 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:20.699 07:56:26 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:20.699 07:56:26 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:20.699 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:20.699 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:10:20.699 00:10:20.699 --- 10.0.0.2 ping statistics --- 00:10:20.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.699 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:10:20.699 07:56:26 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:20.699 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:20.699 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:10:20.699 00:10:20.699 --- 10.0.0.3 ping statistics --- 00:10:20.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.699 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:10:20.699 07:56:26 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:20.699 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:20.699 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:10:20.699 00:10:20.699 --- 10.0.0.1 ping statistics --- 00:10:20.699 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.699 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:10:20.699 07:56:26 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.699 07:56:26 -- nvmf/common.sh@421 -- # return 0 00:10:20.699 07:56:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:20.699 07:56:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.699 07:56:26 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:20.699 07:56:26 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:20.699 07:56:26 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.699 07:56:26 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:20.699 07:56:26 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:20.699 07:56:26 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:20.699 07:56:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:20.699 07:56:26 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:20.699 07:56:26 -- common/autotest_common.sh@10 -- # set +x 00:10:20.699 07:56:26 -- nvmf/common.sh@469 -- # nvmfpid=72660 00:10:20.699 07:56:26 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:20.699 07:56:26 -- nvmf/common.sh@470 -- # waitforlisten 72660 00:10:20.699 07:56:26 -- common/autotest_common.sh@819 -- # '[' -z 72660 ']' 00:10:20.699 07:56:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.699 07:56:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:20.699 07:56:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.699 07:56:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:20.699 07:56:26 -- common/autotest_common.sh@10 -- # set +x 00:10:20.699 [2024-07-13 07:56:26.472320] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:20.699 [2024-07-13 07:56:26.472418] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.958 [2024-07-13 07:56:26.606419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.958 [2024-07-13 07:56:26.638385] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:20.958 [2024-07-13 07:56:26.638548] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:20.958 [2024-07-13 07:56:26.638561] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:20.958 [2024-07-13 07:56:26.638569] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
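[editor's note] For readers following the wrapped xtrace above: nvmf_veth_init builds a small veth/bridge topology so the NVMe-oF target can listen inside its own network namespace, and the earlier "Cannot find device" / "Cannot open network namespace" messages are just the harmless teardown of leftovers from a previous run. A condensed sketch of what the traced commands amount to is below (same interface names and addresses as in the trace, run as root); it is a reconstruction for readability, not the helper itself.

# Reconstruction of the nvmf_veth_init sequence traced above (sketch, not the helper verbatim).
ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry traffic, the *_br ends get bridged together;
# the target-side interfaces are moved into the namespace.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: 10.0.0.1 = initiator side, 10.0.0.2 / 10.0.0.3 = target listeners.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and bridge the host-side ends.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in and let traffic hairpin across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Connectivity checks, exactly as in the trace.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With that topology in place the harness loads nvme-tcp and launches the target inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which is the pid-72660 startup whose DPDK/EAL and reactor messages surround this note.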
00:10:20.958 [2024-07-13 07:56:26.638603] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.525 07:56:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:21.525 07:56:27 -- common/autotest_common.sh@852 -- # return 0 00:10:21.525 07:56:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:21.525 07:56:27 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:21.525 07:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:21.784 07:56:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:21.784 07:56:27 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:10:21.784 07:56:27 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:10:21.784 07:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:21.784 07:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:21.784 [2024-07-13 07:56:27.370040] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:21.784 07:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.784 07:56:27 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:21.784 07:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:21.784 07:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:21.784 07:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.784 07:56:27 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:21.784 07:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:21.784 07:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:21.784 [2024-07-13 07:56:27.386160] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:21.784 07:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.784 07:56:27 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:21.784 07:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:21.784 07:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:21.784 07:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.784 07:56:27 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:10:21.784 07:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:21.784 07:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:21.784 malloc0 00:10:21.784 07:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.784 07:56:27 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:10:21.784 07:56:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:21.784 07:56:27 -- common/autotest_common.sh@10 -- # set +x 00:10:21.784 07:56:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:21.784 07:56:27 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:10:21.784 07:56:27 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:10:21.784 07:56:27 -- nvmf/common.sh@520 -- # config=() 00:10:21.784 07:56:27 -- nvmf/common.sh@520 -- # local subsystem config 00:10:21.784 07:56:27 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:21.784 07:56:27 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:21.784 { 00:10:21.784 "params": { 00:10:21.784 "name": "Nvme$subsystem", 00:10:21.784 "trtype": "$TEST_TRANSPORT", 
00:10:21.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:21.784 "adrfam": "ipv4", 00:10:21.784 "trsvcid": "$NVMF_PORT", 00:10:21.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:21.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:21.784 "hdgst": ${hdgst:-false}, 00:10:21.784 "ddgst": ${ddgst:-false} 00:10:21.784 }, 00:10:21.784 "method": "bdev_nvme_attach_controller" 00:10:21.784 } 00:10:21.784 EOF 00:10:21.784 )") 00:10:21.784 07:56:27 -- nvmf/common.sh@542 -- # cat 00:10:21.784 07:56:27 -- nvmf/common.sh@544 -- # jq . 00:10:21.784 07:56:27 -- nvmf/common.sh@545 -- # IFS=, 00:10:21.784 07:56:27 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:21.784 "params": { 00:10:21.784 "name": "Nvme1", 00:10:21.784 "trtype": "tcp", 00:10:21.784 "traddr": "10.0.0.2", 00:10:21.784 "adrfam": "ipv4", 00:10:21.784 "trsvcid": "4420", 00:10:21.784 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:21.784 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:21.784 "hdgst": false, 00:10:21.784 "ddgst": false 00:10:21.784 }, 00:10:21.784 "method": "bdev_nvme_attach_controller" 00:10:21.784 }' 00:10:21.784 [2024-07-13 07:56:27.468423] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:21.784 [2024-07-13 07:56:27.468515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72687 ] 00:10:22.043 [2024-07-13 07:56:27.609516] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:22.043 [2024-07-13 07:56:27.648764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.043 Running I/O for 10 seconds... 00:10:32.016 00:10:32.016 Latency(us) 00:10:32.016 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:32.016 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:32.016 Verification LBA range: start 0x0 length 0x1000 00:10:32.016 Nvme1n1 : 10.01 9913.42 77.45 0.00 0.00 12877.30 1995.87 21924.77 00:10:32.016 =================================================================================================================== 00:10:32.016 Total : 9913.42 77.45 0.00 0.00 12877.30 1995.87 21924.77 00:10:32.275 07:56:37 -- target/zcopy.sh@39 -- # perfpid=72743 00:10:32.275 07:56:37 -- target/zcopy.sh@41 -- # xtrace_disable 00:10:32.275 07:56:37 -- common/autotest_common.sh@10 -- # set +x 00:10:32.275 07:56:37 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:32.275 07:56:37 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:32.275 07:56:37 -- nvmf/common.sh@520 -- # config=() 00:10:32.275 07:56:37 -- nvmf/common.sh@520 -- # local subsystem config 00:10:32.275 07:56:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:10:32.275 07:56:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:10:32.275 { 00:10:32.275 "params": { 00:10:32.275 "name": "Nvme$subsystem", 00:10:32.275 "trtype": "$TEST_TRANSPORT", 00:10:32.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:32.275 "adrfam": "ipv4", 00:10:32.275 "trsvcid": "$NVMF_PORT", 00:10:32.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:32.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:32.275 "hdgst": ${hdgst:-false}, 00:10:32.275 "ddgst": ${ddgst:-false} 00:10:32.275 }, 00:10:32.275 "method": "bdev_nvme_attach_controller" 00:10:32.275 } 00:10:32.275 EOF 
00:10:32.275 )") 00:10:32.275 07:56:37 -- nvmf/common.sh@542 -- # cat 00:10:32.275 [2024-07-13 07:56:37.948429] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.275 [2024-07-13 07:56:37.948471] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.275 07:56:37 -- nvmf/common.sh@544 -- # jq . 00:10:32.275 07:56:37 -- nvmf/common.sh@545 -- # IFS=, 00:10:32.275 07:56:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:10:32.275 "params": { 00:10:32.275 "name": "Nvme1", 00:10:32.275 "trtype": "tcp", 00:10:32.275 "traddr": "10.0.0.2", 00:10:32.275 "adrfam": "ipv4", 00:10:32.275 "trsvcid": "4420", 00:10:32.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:32.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:32.275 "hdgst": false, 00:10:32.275 "ddgst": false 00:10:32.275 }, 00:10:32.275 "method": "bdev_nvme_attach_controller" 00:10:32.275 }' 00:10:32.275 [2024-07-13 07:56:37.956393] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.275 [2024-07-13 07:56:37.956419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.275 [2024-07-13 07:56:37.964390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.275 [2024-07-13 07:56:37.964416] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.275 [2024-07-13 07:56:37.972394] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.275 [2024-07-13 07:56:37.972419] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.275 [2024-07-13 07:56:37.984396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.275 [2024-07-13 07:56:37.984423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.275 [2024-07-13 07:56:37.993276] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
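[editor's note] Both bdevperf runs (the 10-second verify job above and the 5-second randrw job starting here as pid 72743) are fed the same generated configuration: gen_nvmf_target_json renders a one-controller JSON config and bdevperf reads it from a process-substitution fd (--json /dev/fd/63). The xtrace only echoes the inner config entry; the sketch below shows what the fully rendered file plausibly looks like, assuming SPDK's standard JSON-config layout (a "subsystems" array containing a "bdev" subsystem) as the wrapper, which this trace does not echo and which may carry additional entries. The temporary file path is purely illustrative.

# Hypothetical rendering of the config bdevperf receives on /dev/fd/63 (wrapper assumed,
# inner entry copied from the printf output in the trace above).
cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON

# Equivalent standalone invocation of the second run traced here:
#   /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
#       --json /tmp/bdevperf_nvme.json -t 5 -q 128 -w randrw -M 50 -o 8192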
00:10:32.275 [2024-07-13 07:56:37.993379] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72743 ] 00:10:32.275 [2024-07-13 07:56:37.996396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.275 [2024-07-13 07:56:37.996420] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.275 [2024-07-13 07:56:38.008398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.275 [2024-07-13 07:56:38.008421] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.275 [2024-07-13 07:56:38.020401] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.275 [2024-07-13 07:56:38.020441] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.275 [2024-07-13 07:56:38.032404] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.275 [2024-07-13 07:56:38.032444] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.275 [2024-07-13 07:56:38.044412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.275 [2024-07-13 07:56:38.044455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.275 [2024-07-13 07:56:38.056410] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.275 [2024-07-13 07:56:38.056448] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.275 [2024-07-13 07:56:38.068412] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.275 [2024-07-13 07:56:38.068435] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.275 [2024-07-13 07:56:38.080413] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.275 [2024-07-13 07:56:38.080453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.534 [2024-07-13 07:56:38.092422] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.534 [2024-07-13 07:56:38.092478] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.534 [2024-07-13 07:56:38.104417] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.534 [2024-07-13 07:56:38.104456] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.534 [2024-07-13 07:56:38.116430] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.534 [2024-07-13 07:56:38.116470] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.534 [2024-07-13 07:56:38.128432] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.534 [2024-07-13 07:56:38.128455] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.534 [2024-07-13 07:56:38.129995] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:32.534 [2024-07-13 07:56:38.136456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.534 [2024-07-13 07:56:38.136505] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
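[editor's note] The long run of "Requested NSID 1 already in use" / "Unable to add namespace" pairs that fills the rest of this section lines up with the 5-second randrw job: each pair is one nvmf_subsystem_add_ns RPC rejected because namespace 1 is already attached to cnode1, so these errors are expected noise from the test apparently hammering the namespace-add path while I/O is in flight rather than a failure. For reference, a hand-run sketch of the rpc_cmd configuration traced earlier is below (rpc_cmd wraps scripts/rpc.py; flags copied from the trace); the loop at the end is only a hypothetical stand-in that would reproduce the same error pattern, since the actual loop body in zcopy.sh is not visible in this slice of the log.

# Sketch of the target-side setup issued via rpc_cmd earlier in this test
# (rpc.py talks to the default /var/tmp/spdk.sock, which is reachable even
# though nvmf_tgt runs inside nvmf_tgt_ns_spdk).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

"$rpc" nvmf_create_transport -t tcp -o -c 0 --zcopy
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
"$rpc" bdev_malloc_create 32 4096 -b malloc0      # 32 MiB RAM-backed bdev, 4 KiB blocks
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

# Hypothetical stand-in for the loop behind the error flood below: re-adding NSID 1
# while it already exists fails with "Requested NSID 1 already in use", producing one
# error pair per call for as long as the bdevperf job (perfpid in the trace) is alive.
while kill -0 "$perfpid" 2> /dev/null; do
    "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done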
00:10:32.534 [2024-07-13 07:56:38.144451] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.534 [2024-07-13 07:56:38.144498] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.534 [2024-07-13 07:56:38.156475] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.534 [2024-07-13 07:56:38.156534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.534 [2024-07-13 07:56:38.163672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.534 [2024-07-13 07:56:38.164466] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.534 [2024-07-13 07:56:38.164492] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.534 [2024-07-13 07:56:38.172455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.534 [2024-07-13 07:56:38.172494] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.534 [2024-07-13 07:56:38.180481] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.534 [2024-07-13 07:56:38.180533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.534 [2024-07-13 07:56:38.188482] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.534 [2024-07-13 07:56:38.188533] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.534 [2024-07-13 07:56:38.196489] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.534 [2024-07-13 07:56:38.196540] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.534 [2024-07-13 07:56:38.204480] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.534 [2024-07-13 07:56:38.204523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.534 [2024-07-13 07:56:38.212490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.534 [2024-07-13 07:56:38.212539] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.534 [2024-07-13 07:56:38.220488] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.535 [2024-07-13 07:56:38.220536] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.535 [2024-07-13 07:56:38.228490] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.535 [2024-07-13 07:56:38.228534] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.535 [2024-07-13 07:56:38.236500] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.535 [2024-07-13 07:56:38.236543] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.535 [2024-07-13 07:56:38.244509] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.535 [2024-07-13 07:56:38.244553] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.535 [2024-07-13 07:56:38.252533] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.535 [2024-07-13 07:56:38.252576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.535 [2024-07-13 07:56:38.260520] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.535 [2024-07-13 07:56:38.260564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.535 [2024-07-13 07:56:38.268527] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.535 [2024-07-13 07:56:38.268570] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.535 [2024-07-13 07:56:38.276534] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.535 [2024-07-13 07:56:38.276574] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.535 [2024-07-13 07:56:38.284545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.535 [2024-07-13 07:56:38.284591] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.535 Running I/O for 5 seconds... 00:10:32.535 [2024-07-13 07:56:38.296555] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.535 [2024-07-13 07:56:38.296598] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.535 [2024-07-13 07:56:38.310007] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.535 [2024-07-13 07:56:38.310058] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.535 [2024-07-13 07:56:38.321974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.535 [2024-07-13 07:56:38.322025] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.535 [2024-07-13 07:56:38.331055] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.535 [2024-07-13 07:56:38.331103] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.535 [2024-07-13 07:56:38.341384] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.535 [2024-07-13 07:56:38.341432] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.351607] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.351658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.361851] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.361922] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.376902] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.376949] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.388376] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.388423] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.396947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.396994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.409085] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 
[2024-07-13 07:56:38.409133] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.419031] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.419064] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.429143] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.429189] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.438957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.439008] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.448703] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.448750] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.458710] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.458759] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.468638] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.468686] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.478616] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.478663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.488721] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.488768] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.498734] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.498788] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.508494] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.508541] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.518444] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.518490] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.527895] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.527942] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.537660] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.537707] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.547184] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.547231] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.557163] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.557212] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.567327] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.567375] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.581499] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.581531] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.794 [2024-07-13 07:56:38.591205] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.794 [2024-07-13 07:56:38.591237] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:32.795 [2024-07-13 07:56:38.606130] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:32.795 [2024-07-13 07:56:38.606166] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.053 [2024-07-13 07:56:38.622571] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.053 [2024-07-13 07:56:38.622603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.053 [2024-07-13 07:56:38.632290] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.053 [2024-07-13 07:56:38.632321] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.053 [2024-07-13 07:56:38.642965] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.642998] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.660552] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.660601] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.677310] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.677359] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.686648] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.686696] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.700065] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.700113] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.709087] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.709135] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.719315] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.719362] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.728881] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.728928] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.738808] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.738867] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.748349] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.748395] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.758372] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.758418] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.768057] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.768106] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.777834] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.777888] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.787864] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.787912] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.797694] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.797741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.807518] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.807564] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.817086] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.817134] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.826805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.826861] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.836669] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.836715] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.846492] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.846523] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.856013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.856060] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.054 [2024-07-13 07:56:38.865994] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.054 [2024-07-13 07:56:38.866028] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:38.880519] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:38.880566] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:38.890020] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:38.890055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:38.903188] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:38.903239] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:38.913636] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:38.913684] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:38.928017] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:38.928052] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:38.937403] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:38.937453] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:38.953779] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:38.953837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:38.963195] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:38.963242] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:38.975763] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:38.975837] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:38.985406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:38.985454] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:38.995437] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:38.995485] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:39.005229] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:39.005276] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:39.015132] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:39.015196] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:39.024591] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:39.024637] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:39.034298] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:39.034345] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:39.044286] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:39.044333] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:39.054356] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:39.054402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:39.063819] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:39.063865] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:39.073529] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:39.073576] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:39.083723] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:39.083770] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:39.093556] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:39.093603] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:39.103689] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:39.103736] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:39.113456] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:39.113503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.313 [2024-07-13 07:56:39.124171] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.313 [2024-07-13 07:56:39.124219] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.140923] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.140955] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.149957] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.150006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.164316] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.164364] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.173623] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.173670] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.183645] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.183693] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.193694] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.193741] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.203449] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.203496] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.213077] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.213124] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.222739] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.222811] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.233107] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.233169] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.244336] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.244383] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.253420] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.253467] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.263711] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.263758] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.273570] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.273617] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.283588] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.283636] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.293611] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.293658] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.303949] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.303996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.313626] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.313674] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.324872] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.324919] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.333740] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.333813] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.346598] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.346646] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.356123] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.356186] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.370165] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.370215] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.571 [2024-07-13 07:56:39.378919] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.571 [2024-07-13 07:56:39.378966] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.828 [2024-07-13 07:56:39.393604] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.828 [2024-07-13 07:56:39.393652] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.828 [2024-07-13 07:56:39.411603] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.828 [2024-07-13 07:56:39.411650] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.828 [2024-07-13 07:56:39.421252] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.828 [2024-07-13 07:56:39.421299] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.828 [2024-07-13 07:56:39.431440] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.828 [2024-07-13 07:56:39.431487] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.828 [2024-07-13 07:56:39.441243] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.828 [2024-07-13 07:56:39.441291] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.828 [2024-07-13 07:56:39.450945] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.828 [2024-07-13 07:56:39.450994] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.828 [2024-07-13 07:56:39.460858] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.828 [2024-07-13 07:56:39.460905] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.828 [2024-07-13 07:56:39.471109] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.828 [2024-07-13 07:56:39.471170] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.828 [2024-07-13 07:56:39.481180] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.828 [2024-07-13 07:56:39.481257] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.828 [2024-07-13 07:56:39.491640] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.828 [2024-07-13 07:56:39.491688] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.828 [2024-07-13 07:56:39.501250] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:33.828 [2024-07-13 07:56:39.501297] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:33.828
[... the identical spdk_nvmf_subsystem_add_ns_ext "Requested NSID 1 already in use" / nvmf_rpc_ns_paused "Unable to add namespace" error pair repeats once per iteration, with only the timestamps changing, from 07:56:39.511 through 07:56:42.833; the repeated entries are omitted ...]
00:10:37.186 [2024-07-13 07:56:42.843122] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.186 [2024-07-13 07:56:42.843169]
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.186 [2024-07-13 07:56:42.852912] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.186 [2024-07-13 07:56:42.852958] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.186 [2024-07-13 07:56:42.862758] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.186 [2024-07-13 07:56:42.862830] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.186 [2024-07-13 07:56:42.872398] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.186 [2024-07-13 07:56:42.872445] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.186 [2024-07-13 07:56:42.891718] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.186 [2024-07-13 07:56:42.891766] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.186 [2024-07-13 07:56:42.901241] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.186 [2024-07-13 07:56:42.901288] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.186 [2024-07-13 07:56:42.911091] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.186 [2024-07-13 07:56:42.911153] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.186 [2024-07-13 07:56:42.921354] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.186 [2024-07-13 07:56:42.921402] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.186 [2024-07-13 07:56:42.932361] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.186 [2024-07-13 07:56:42.932410] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.186 [2024-07-13 07:56:42.943238] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.186 [2024-07-13 07:56:42.943286] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.186 [2024-07-13 07:56:42.955947] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.186 [2024-07-13 07:56:42.955996] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.186 [2024-07-13 07:56:42.974497] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.186 [2024-07-13 07:56:42.974547] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.186 [2024-07-13 07:56:42.988678] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.186 [2024-07-13 07:56:42.988726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.186 [2024-07-13 07:56:42.999378] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.186 [2024-07-13 07:56:42.999430] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.014074] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.014110] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.023680] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.023726] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.037966] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.038004] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.047013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.047062] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.057365] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.057412] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.067615] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.067663] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.077575] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.077623] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.087390] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.087437] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.097264] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.097310] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.107059] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.107107] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.116982] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.117029] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.126958] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.127005] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.136805] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.136862] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.146922] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.146969] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.157455] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.157503] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.169933] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.169983] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.181097] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.181144] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.189714] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.189761] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.201716] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.201764] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.216744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.216840] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.226396] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.226443] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.237545] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.237594] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.247976] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.248011] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.444 [2024-07-13 07:56:43.258667] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.444 [2024-07-13 07:56:43.258718] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.270744] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.270794] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.286406] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.286438] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.294838] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.294880] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 00:10:37.701 Latency(us) 00:10:37.701 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:37.701 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:37.701 Nvme1n1 : 5.01 12618.84 98.58 0.00 0.00 10132.85 4081.11 24903.68 00:10:37.701 =================================================================================================================== 00:10:37.701 Total : 12618.84 98.58 0.00 0.00 10132.85 4081.11 24903.68 00:10:37.701 [2024-07-13 07:56:43.305974] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.306006] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.313975] 
subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.314010] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.326002] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.326049] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.334015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.334056] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.342014] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.342055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.350015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.350055] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.358006] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.358048] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.366001] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.366039] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.373998] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.374031] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.381987] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.382014] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.390033] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.390072] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.398018] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.398050] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.406013] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.406040] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.418030] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.418068] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.426009] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.426034] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 [2024-07-13 07:56:43.434015] subsystem.c:1793:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:37.701 [2024-07-13 07:56:43.434041] 
nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:37.701 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (72743) - No such process 00:10:37.701 07:56:43 -- target/zcopy.sh@49 -- # wait 72743 00:10:37.701 07:56:43 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:37.701 07:56:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:37.702 07:56:43 -- common/autotest_common.sh@10 -- # set +x 00:10:37.702 07:56:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:37.702 07:56:43 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:37.702 07:56:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:37.702 07:56:43 -- common/autotest_common.sh@10 -- # set +x 00:10:37.702 delay0 00:10:37.702 07:56:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:37.702 07:56:43 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:37.702 07:56:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:37.702 07:56:43 -- common/autotest_common.sh@10 -- # set +x 00:10:37.702 07:56:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:37.702 07:56:43 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:37.959 [2024-07-13 07:56:43.616511] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:44.523 Initializing NVMe Controllers 00:10:44.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:44.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:44.523 Initialization complete. Launching workers. 
00:10:44.523 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 806 00:10:44.523 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1093, failed to submit 33 00:10:44.523 success 970, unsuccess 123, failed 0 00:10:44.523 07:56:49 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:44.523 07:56:49 -- target/zcopy.sh@60 -- # nvmftestfini 00:10:44.523 07:56:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:44.523 07:56:49 -- nvmf/common.sh@116 -- # sync 00:10:44.523 07:56:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:44.523 07:56:49 -- nvmf/common.sh@119 -- # set +e 00:10:44.523 07:56:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:44.523 07:56:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:44.523 rmmod nvme_tcp 00:10:44.523 rmmod nvme_fabrics 00:10:44.523 rmmod nvme_keyring 00:10:44.523 07:56:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:44.523 07:56:49 -- nvmf/common.sh@123 -- # set -e 00:10:44.523 07:56:49 -- nvmf/common.sh@124 -- # return 0 00:10:44.523 07:56:49 -- nvmf/common.sh@477 -- # '[' -n 72660 ']' 00:10:44.523 07:56:49 -- nvmf/common.sh@478 -- # killprocess 72660 00:10:44.523 07:56:49 -- common/autotest_common.sh@926 -- # '[' -z 72660 ']' 00:10:44.523 07:56:49 -- common/autotest_common.sh@930 -- # kill -0 72660 00:10:44.523 07:56:49 -- common/autotest_common.sh@931 -- # uname 00:10:44.523 07:56:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:44.523 07:56:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72660 00:10:44.523 killing process with pid 72660 00:10:44.523 07:56:49 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:10:44.523 07:56:49 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:10:44.523 07:56:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72660' 00:10:44.523 07:56:49 -- common/autotest_common.sh@945 -- # kill 72660 00:10:44.523 07:56:49 -- common/autotest_common.sh@950 -- # wait 72660 00:10:44.523 07:56:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:44.523 07:56:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:44.523 07:56:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:44.523 07:56:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:44.523 07:56:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:44.523 07:56:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.523 07:56:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.523 07:56:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.523 07:56:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:44.523 ************************************ 00:10:44.523 END TEST nvmf_zcopy 00:10:44.523 ************************************ 00:10:44.523 00:10:44.523 real 0m24.112s 00:10:44.523 user 0m39.759s 00:10:44.523 sys 0m6.410s 00:10:44.523 07:56:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:44.523 07:56:50 -- common/autotest_common.sh@10 -- # set +x 00:10:44.523 07:56:50 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:44.523 07:56:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:44.523 07:56:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:44.523 07:56:50 -- common/autotest_common.sh@10 -- # set +x 00:10:44.523 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 
1096: kill: (59682) - No such process 00:10:44.523 ************************************ 00:10:44.524 START TEST nvmf_nmic 00:10:44.524 ************************************ 00:10:44.524 07:56:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:44.524 * Looking for test storage... 00:10:44.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:44.524 07:56:50 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:44.524 07:56:50 -- nvmf/common.sh@7 -- # uname -s 00:10:44.524 07:56:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.524 07:56:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.524 07:56:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.524 07:56:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.524 07:56:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.524 07:56:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.524 07:56:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.524 07:56:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.524 07:56:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.524 07:56:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.524 07:56:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:10:44.524 07:56:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:10:44.524 07:56:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.524 07:56:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.524 07:56:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:44.524 07:56:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:44.524 07:56:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.524 07:56:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.524 07:56:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.524 07:56:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.524 07:56:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.524 07:56:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.524 07:56:50 -- paths/export.sh@5 -- # export PATH 00:10:44.524 07:56:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.524 07:56:50 -- nvmf/common.sh@46 -- # : 0 00:10:44.524 07:56:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:44.524 07:56:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:44.524 07:56:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:44.524 07:56:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.524 07:56:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.524 07:56:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:44.524 07:56:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:44.524 07:56:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:44.524 07:56:50 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:44.524 07:56:50 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:44.524 07:56:50 -- target/nmic.sh@14 -- # nvmftestinit 00:10:44.524 07:56:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:44.524 07:56:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.524 07:56:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:44.524 07:56:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:44.524 07:56:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:44.524 07:56:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.524 07:56:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.524 07:56:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.524 07:56:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:44.524 07:56:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:44.524 07:56:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:44.524 07:56:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:44.524 07:56:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:44.524 07:56:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:44.524 07:56:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:44.524 07:56:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.524 07:56:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:44.524 07:56:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:44.524 07:56:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:44.524 07:56:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:44.524 07:56:50 -- nvmf/common.sh@146 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:44.524 07:56:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.524 07:56:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:44.524 07:56:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:44.524 07:56:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:44.524 07:56:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:44.524 07:56:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:44.524 07:56:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:44.524 Cannot find device "nvmf_tgt_br" 00:10:44.524 07:56:50 -- nvmf/common.sh@154 -- # true 00:10:44.524 07:56:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:44.524 Cannot find device "nvmf_tgt_br2" 00:10:44.524 07:56:50 -- nvmf/common.sh@155 -- # true 00:10:44.524 07:56:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:44.524 07:56:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:44.524 Cannot find device "nvmf_tgt_br" 00:10:44.524 07:56:50 -- nvmf/common.sh@157 -- # true 00:10:44.524 07:56:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:44.524 Cannot find device "nvmf_tgt_br2" 00:10:44.524 07:56:50 -- nvmf/common.sh@158 -- # true 00:10:44.524 07:56:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:44.802 07:56:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:44.802 07:56:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:44.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.802 07:56:50 -- nvmf/common.sh@161 -- # true 00:10:44.802 07:56:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:44.802 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:44.802 07:56:50 -- nvmf/common.sh@162 -- # true 00:10:44.802 07:56:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:44.802 07:56:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:44.802 07:56:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:44.802 07:56:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:44.802 07:56:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:44.802 07:56:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:44.802 07:56:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:44.802 07:56:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:44.802 07:56:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:44.802 07:56:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:44.802 07:56:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:44.802 07:56:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:44.802 07:56:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:44.802 07:56:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:44.802 07:56:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:44.802 07:56:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:10:44.802 07:56:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:44.802 07:56:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:44.802 07:56:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:44.802 07:56:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:44.802 07:56:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:44.802 07:56:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:44.802 07:56:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:44.802 07:56:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:44.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:44.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:10:44.802 00:10:44.802 --- 10.0.0.2 ping statistics --- 00:10:44.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.802 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:10:44.802 07:56:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:44.802 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:44.802 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:10:44.802 00:10:44.802 --- 10.0.0.3 ping statistics --- 00:10:44.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.802 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:10:44.802 07:56:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:44.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:44.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:10:44.802 00:10:44.802 --- 10.0.0.1 ping statistics --- 00:10:44.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.802 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:10:44.802 07:56:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.802 07:56:50 -- nvmf/common.sh@421 -- # return 0 00:10:44.802 07:56:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:44.802 07:56:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.802 07:56:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:44.802 07:56:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:44.802 07:56:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.802 07:56:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:44.802 07:56:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:44.802 07:56:50 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:44.802 07:56:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:44.802 07:56:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:44.802 07:56:50 -- common/autotest_common.sh@10 -- # set +x 00:10:44.802 07:56:50 -- nvmf/common.sh@469 -- # nvmfpid=72987 00:10:44.802 07:56:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:44.802 07:56:50 -- nvmf/common.sh@470 -- # waitforlisten 72987 00:10:44.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
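For reference, the nvmf_veth_init plumbing logged above boils down to the short sequence sketched here. This is only an illustrative condensation (the second target interface nvmf_tgt_if2/10.0.0.3 and the cleanup of stale links are omitted); the interface names and 10.0.0.x addresses are the ones printed in the log, and the commands need root on a disposable test host:
  ip netns add nvmf_tgt_ns_spdk                              # network namespace that will hold the NVMe-oF target
  ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator-side veth pair
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address on the host side
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge                            # bridge joining the two host-side veth peers
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                         # target address reachable from the host, as checked above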
00:10:44.802 07:56:50 -- common/autotest_common.sh@819 -- # '[' -z 72987 ']' 00:10:44.802 07:56:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.802 07:56:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:44.802 07:56:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.802 07:56:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:44.802 07:56:50 -- common/autotest_common.sh@10 -- # set +x 00:10:45.061 [2024-07-13 07:56:50.663680] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:10:45.061 [2024-07-13 07:56:50.663996] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.061 [2024-07-13 07:56:50.802515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.061 [2024-07-13 07:56:50.838263] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:45.061 [2024-07-13 07:56:50.838591] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.061 [2024-07-13 07:56:50.838646] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.061 [2024-07-13 07:56:50.838809] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:45.061 [2024-07-13 07:56:50.839538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.061 [2024-07-13 07:56:50.839739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.061 [2024-07-13 07:56:50.839635] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.061 [2024-07-13 07:56:50.839739] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.997 07:56:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:45.997 07:56:51 -- common/autotest_common.sh@852 -- # return 0 00:10:45.997 07:56:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:45.997 07:56:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:45.997 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:45.997 07:56:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.997 07:56:51 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:45.997 07:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:45.997 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:45.997 [2024-07-13 07:56:51.646375] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.997 07:56:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:45.997 07:56:51 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:45.997 07:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:45.997 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:45.997 Malloc0 00:10:45.997 07:56:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:45.997 07:56:51 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:45.997 07:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:45.997 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:45.997 07:56:51 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:45.997 07:56:51 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:45.997 07:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:45.997 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:45.997 07:56:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:45.997 07:56:51 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.997 07:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:45.997 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:45.997 [2024-07-13 07:56:51.707845] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.997 test case1: single bdev can't be used in multiple subsystems 00:10:45.997 07:56:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:45.997 07:56:51 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:45.997 07:56:51 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:45.997 07:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:45.997 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:45.997 07:56:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:45.997 07:56:51 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:45.997 07:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:45.997 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:45.997 07:56:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:45.997 07:56:51 -- target/nmic.sh@28 -- # nmic_status=0 00:10:45.997 07:56:51 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:45.997 07:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:45.997 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:45.997 [2024-07-13 07:56:51.731664] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:45.997 [2024-07-13 07:56:51.731701] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:45.997 [2024-07-13 07:56:51.731714] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:45.997 request: 00:10:45.997 { 00:10:45.997 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:45.997 "namespace": { 00:10:45.997 "bdev_name": "Malloc0" 00:10:45.997 }, 00:10:45.997 "method": "nvmf_subsystem_add_ns", 00:10:45.997 "req_id": 1 00:10:45.997 } 00:10:45.997 Got JSON-RPC error response 00:10:45.997 response: 00:10:45.997 { 00:10:45.997 "code": -32602, 00:10:45.997 "message": "Invalid parameters" 00:10:45.997 } 00:10:45.997 Adding namespace failed - expected result. 00:10:45.997 test case2: host connect to nvmf target in multiple paths 00:10:45.997 07:56:51 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:10:45.997 07:56:51 -- target/nmic.sh@29 -- # nmic_status=1 00:10:45.997 07:56:51 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:45.997 07:56:51 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
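The rejection above is the point of test case 1: Malloc0 is already claimed (type exclusive_write) by nqn.2016-06.io.spdk:cnode1, so attaching the same bdev to cnode2 fails with JSON-RPC error -32602. A rough way to reproduce it by hand against a target configured like this one — assuming both subsystems already exist and the RPC socket is the default /var/tmp/spdk.sock — is to issue the same RPC twice with different subsystems:
  # first call claims the bdev for cnode1 and succeeds
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # second call is rejected: the bdev cannot be opened for cnode2 ("Invalid parameters", code -32602)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0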
00:10:45.997 07:56:51 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:45.997 07:56:51 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:45.997 07:56:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:10:45.997 07:56:51 -- common/autotest_common.sh@10 -- # set +x 00:10:45.997 [2024-07-13 07:56:51.743842] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:45.997 07:56:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:10:45.997 07:56:51 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:46.256 07:56:51 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:46.256 07:56:51 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:46.256 07:56:51 -- common/autotest_common.sh@1177 -- # local i=0 00:10:46.256 07:56:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:46.256 07:56:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:10:46.256 07:56:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:10:48.786 07:56:54 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:10:48.786 07:56:54 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:10:48.786 07:56:54 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:10:48.786 07:56:54 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:10:48.786 07:56:54 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:10:48.786 07:56:54 -- common/autotest_common.sh@1187 -- # return 0 00:10:48.786 07:56:54 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:48.786 [global] 00:10:48.786 thread=1 00:10:48.786 invalidate=1 00:10:48.786 rw=write 00:10:48.786 time_based=1 00:10:48.786 runtime=1 00:10:48.786 ioengine=libaio 00:10:48.786 direct=1 00:10:48.786 bs=4096 00:10:48.786 iodepth=1 00:10:48.786 norandommap=0 00:10:48.786 numjobs=1 00:10:48.786 00:10:48.786 verify_dump=1 00:10:48.786 verify_backlog=512 00:10:48.786 verify_state_save=0 00:10:48.786 do_verify=1 00:10:48.786 verify=crc32c-intel 00:10:48.786 [job0] 00:10:48.786 filename=/dev/nvme0n1 00:10:48.786 Could not set queue depth (nvme0n1) 00:10:48.786 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.786 fio-3.35 00:10:48.786 Starting 1 thread 00:10:49.723 00:10:49.723 job0: (groupid=0, jobs=1): err= 0: pid=73056: Sat Jul 13 07:56:55 2024 00:10:49.723 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:10:49.723 slat (nsec): min=11962, max=57757, avg=15243.24, stdev=3410.07 00:10:49.723 clat (usec): min=136, max=2795, avg=175.55, stdev=69.82 00:10:49.723 lat (usec): min=149, max=2817, avg=190.79, stdev=70.24 00:10:49.723 clat percentiles (usec): 00:10:49.723 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:10:49.723 | 30.00th=[ 163], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 176], 00:10:49.723 | 70.00th=[ 182], 80.00th=[ 186], 90.00th=[ 194], 95.00th=[ 200], 00:10:49.723 | 99.00th=[ 217], 99.50th=[ 269], 
99.90th=[ 807], 99.95th=[ 2343], 00:10:49.723 | 99.99th=[ 2802] 00:10:49.723 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:49.723 slat (nsec): min=13971, max=74728, avg=22420.70, stdev=5466.87 00:10:49.723 clat (usec): min=83, max=199, avg=108.97, stdev=13.37 00:10:49.723 lat (usec): min=102, max=257, avg=131.39, stdev=15.80 00:10:49.723 clat percentiles (usec): 00:10:49.723 | 1.00th=[ 88], 5.00th=[ 91], 10.00th=[ 94], 20.00th=[ 98], 00:10:49.723 | 30.00th=[ 101], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 111], 00:10:49.723 | 70.00th=[ 114], 80.00th=[ 119], 90.00th=[ 127], 95.00th=[ 135], 00:10:49.723 | 99.00th=[ 149], 99.50th=[ 159], 99.90th=[ 182], 99.95th=[ 184], 00:10:49.723 | 99.99th=[ 200] 00:10:49.723 bw ( KiB/s): min=12263, max=12263, per=99.90%, avg=12263.00, stdev= 0.00, samples=1 00:10:49.723 iops : min= 3065, max= 3065, avg=3065.00, stdev= 0.00, samples=1 00:10:49.723 lat (usec) : 100=12.96%, 250=86.76%, 500=0.20%, 750=0.02%, 1000=0.02% 00:10:49.723 lat (msec) : 2=0.02%, 4=0.03% 00:10:49.723 cpu : usr=2.40%, sys=9.00%, ctx=6141, majf=0, minf=2 00:10:49.723 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.723 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.723 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.723 issued rwts: total=3069,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.723 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.723 00:10:49.723 Run status group 0 (all jobs): 00:10:49.723 READ: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:49.723 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:49.723 00:10:49.723 Disk stats (read/write): 00:10:49.723 nvme0n1: ios=2610/3025, merge=0/0, ticks=481/350, in_queue=831, util=90.98% 00:10:49.723 07:56:55 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:49.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:49.723 07:56:55 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:49.723 07:56:55 -- common/autotest_common.sh@1198 -- # local i=0 00:10:49.723 07:56:55 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:10:49.723 07:56:55 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.723 07:56:55 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:10:49.723 07:56:55 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.723 07:56:55 -- common/autotest_common.sh@1210 -- # return 0 00:10:49.723 07:56:55 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:49.723 07:56:55 -- target/nmic.sh@53 -- # nvmftestfini 00:10:49.723 07:56:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:10:49.723 07:56:55 -- nvmf/common.sh@116 -- # sync 00:10:49.723 07:56:55 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:10:49.723 07:56:55 -- nvmf/common.sh@119 -- # set +e 00:10:49.723 07:56:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:10:49.723 07:56:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:10:49.723 rmmod nvme_tcp 00:10:49.723 rmmod nvme_fabrics 00:10:49.723 rmmod nvme_keyring 00:10:49.723 07:56:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:10:49.723 07:56:55 -- nvmf/common.sh@123 -- # set -e 00:10:49.723 07:56:55 -- nvmf/common.sh@124 -- # return 0 00:10:49.723 07:56:55 -- nvmf/common.sh@477 
-- # '[' -n 72987 ']' 00:10:49.723 07:56:55 -- nvmf/common.sh@478 -- # killprocess 72987 00:10:49.723 07:56:55 -- common/autotest_common.sh@926 -- # '[' -z 72987 ']' 00:10:49.723 07:56:55 -- common/autotest_common.sh@930 -- # kill -0 72987 00:10:49.723 07:56:55 -- common/autotest_common.sh@931 -- # uname 00:10:49.723 07:56:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:10:49.723 07:56:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 72987 00:10:49.723 killing process with pid 72987 00:10:49.723 07:56:55 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:10:49.723 07:56:55 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:10:49.723 07:56:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 72987' 00:10:49.723 07:56:55 -- common/autotest_common.sh@945 -- # kill 72987 00:10:49.723 07:56:55 -- common/autotest_common.sh@950 -- # wait 72987 00:10:49.981 07:56:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:10:49.981 07:56:55 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:10:49.981 07:56:55 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:10:49.981 07:56:55 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:49.981 07:56:55 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:10:49.981 07:56:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.981 07:56:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:49.981 07:56:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.981 07:56:55 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:10:49.981 00:10:49.981 real 0m5.599s 00:10:49.981 user 0m18.099s 00:10:49.981 sys 0m2.141s 00:10:49.981 ************************************ 00:10:49.981 END TEST nvmf_nmic 00:10:49.981 ************************************ 00:10:49.981 07:56:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:49.981 07:56:55 -- common/autotest_common.sh@10 -- # set +x 00:10:49.981 07:56:55 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:49.981 07:56:55 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:10:49.981 07:56:55 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:10:49.981 07:56:55 -- common/autotest_common.sh@10 -- # set +x 00:10:49.981 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:10:49.981 ************************************ 00:10:49.981 START TEST nvmf_fio_target 00:10:49.981 ************************************ 00:10:49.981 07:56:55 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:50.239 * Looking for test storage... 
00:10:50.239 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:50.239 07:56:55 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:50.239 07:56:55 -- nvmf/common.sh@7 -- # uname -s 00:10:50.239 07:56:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.239 07:56:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.239 07:56:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.239 07:56:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.239 07:56:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.240 07:56:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.240 07:56:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.240 07:56:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.240 07:56:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.240 07:56:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.240 07:56:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:10:50.240 07:56:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:10:50.240 07:56:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.240 07:56:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.240 07:56:55 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:50.240 07:56:55 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:50.240 07:56:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.240 07:56:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.240 07:56:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.240 07:56:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.240 07:56:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.240 07:56:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.240 07:56:55 -- paths/export.sh@5 
-- # export PATH 00:10:50.240 07:56:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.240 07:56:55 -- nvmf/common.sh@46 -- # : 0 00:10:50.240 07:56:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:10:50.240 07:56:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:10:50.240 07:56:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:10:50.240 07:56:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.240 07:56:55 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.240 07:56:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:10:50.240 07:56:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:10:50.240 07:56:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:10:50.240 07:56:55 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:50.240 07:56:55 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:50.240 07:56:55 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:50.240 07:56:55 -- target/fio.sh@16 -- # nvmftestinit 00:10:50.240 07:56:55 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:10:50.240 07:56:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.240 07:56:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:10:50.240 07:56:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:10:50.240 07:56:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:10:50.240 07:56:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.240 07:56:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:50.240 07:56:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.240 07:56:55 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:10:50.240 07:56:55 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:10:50.240 07:56:55 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:10:50.240 07:56:55 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:10:50.240 07:56:55 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:10:50.240 07:56:55 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:10:50.240 07:56:55 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:50.240 07:56:55 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:50.240 07:56:55 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:10:50.240 07:56:55 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:10:50.240 07:56:55 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:50.240 07:56:55 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:50.240 07:56:55 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:50.240 07:56:55 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:50.240 07:56:55 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:50.240 07:56:55 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:50.240 07:56:55 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:50.240 07:56:55 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:50.240 07:56:55 -- 
nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:10:50.240 07:56:55 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:10:50.240 Cannot find device "nvmf_tgt_br" 00:10:50.240 07:56:55 -- nvmf/common.sh@154 -- # true 00:10:50.240 07:56:55 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:10:50.240 Cannot find device "nvmf_tgt_br2" 00:10:50.240 07:56:55 -- nvmf/common.sh@155 -- # true 00:10:50.240 07:56:55 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:10:50.240 07:56:55 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:10:50.240 Cannot find device "nvmf_tgt_br" 00:10:50.240 07:56:55 -- nvmf/common.sh@157 -- # true 00:10:50.240 07:56:55 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:10:50.240 Cannot find device "nvmf_tgt_br2" 00:10:50.240 07:56:55 -- nvmf/common.sh@158 -- # true 00:10:50.240 07:56:55 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:10:50.240 07:56:55 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:10:50.240 07:56:56 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:50.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.240 07:56:56 -- nvmf/common.sh@161 -- # true 00:10:50.240 07:56:56 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:50.240 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:50.240 07:56:56 -- nvmf/common.sh@162 -- # true 00:10:50.240 07:56:56 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:10:50.240 07:56:56 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:50.240 07:56:56 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:50.240 07:56:56 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:50.240 07:56:56 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:50.498 07:56:56 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:50.498 07:56:56 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:50.498 07:56:56 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:10:50.498 07:56:56 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:10:50.498 07:56:56 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:10:50.498 07:56:56 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:10:50.498 07:56:56 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:10:50.498 07:56:56 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:10:50.498 07:56:56 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:50.498 07:56:56 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:50.498 07:56:56 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:50.498 07:56:56 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:10:50.498 07:56:56 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:10:50.498 07:56:56 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:10:50.498 07:56:56 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:50.498 07:56:56 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:50.498 07:56:56 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 
-i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:50.498 07:56:56 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:50.498 07:56:56 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:10:50.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:50.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:10:50.498 00:10:50.498 --- 10.0.0.2 ping statistics --- 00:10:50.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.498 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:10:50.498 07:56:56 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:10:50.498 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:50.498 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:10:50.498 00:10:50.498 --- 10.0.0.3 ping statistics --- 00:10:50.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.498 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:10:50.498 07:56:56 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:50.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:50.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:10:50.498 00:10:50.498 --- 10.0.0.1 ping statistics --- 00:10:50.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:50.498 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:10:50.498 07:56:56 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:50.498 07:56:56 -- nvmf/common.sh@421 -- # return 0 00:10:50.498 07:56:56 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:10:50.498 07:56:56 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:50.498 07:56:56 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:10:50.498 07:56:56 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:10:50.498 07:56:56 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:50.498 07:56:56 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:10:50.498 07:56:56 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:10:50.498 07:56:56 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:50.498 07:56:56 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:10:50.498 07:56:56 -- common/autotest_common.sh@712 -- # xtrace_disable 00:10:50.498 07:56:56 -- common/autotest_common.sh@10 -- # set +x 00:10:50.498 07:56:56 -- nvmf/common.sh@469 -- # nvmfpid=73222 00:10:50.498 07:56:56 -- nvmf/common.sh@470 -- # waitforlisten 73222 00:10:50.498 07:56:56 -- common/autotest_common.sh@819 -- # '[' -z 73222 ']' 00:10:50.498 07:56:56 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:50.498 07:56:56 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.498 07:56:56 -- common/autotest_common.sh@824 -- # local max_retries=100 00:10:50.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.498 07:56:56 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.498 07:56:56 -- common/autotest_common.sh@828 -- # xtrace_disable 00:10:50.498 07:56:56 -- common/autotest_common.sh@10 -- # set +x 00:10:50.498 [2024-07-13 07:56:56.295191] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
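Note for readers stepping through the trace: the nvmf_veth_init block above builds a small virtual topology before the target starts. A target network namespace (nvmf_tgt_ns_spdk) holds two veth ends addressed 10.0.0.2 and 10.0.0.3, the initiator end (10.0.0.1) stays in the root namespace, and the host-side peer ends are enslaved to a bridge (nvmf_br); the three pings verify that the initiator can reach both target addresses and that the namespace can reach the initiator. A condensed shell sketch of the same topology, using the interface names and addresses taken from the trace (an outline of what nvmf_veth_init does, not the exact script):

    # create the target namespace and the veth pairs (names as in the trace)
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    # move the target ends into the namespace and assign addresses
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up and bridge the host-side peers together
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # allow NVMe/TCP traffic in and hairpin traffic across the bridge
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The target itself is then launched inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the SPDK application whose initialization banner follows in the trace.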
00:10:50.499 [2024-07-13 07:56:56.295289] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:50.756 [2024-07-13 07:56:56.434402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.756 [2024-07-13 07:56:56.470095] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:10:50.756 [2024-07-13 07:56:56.470449] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:50.756 [2024-07-13 07:56:56.470593] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.756 [2024-07-13 07:56:56.470733] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.756 [2024-07-13 07:56:56.471031] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.756 [2024-07-13 07:56:56.471495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.756 [2024-07-13 07:56:56.471613] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.756 [2024-07-13 07:56:56.471667] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.689 07:56:57 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:10:51.689 07:56:57 -- common/autotest_common.sh@852 -- # return 0 00:10:51.689 07:56:57 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:10:51.689 07:56:57 -- common/autotest_common.sh@718 -- # xtrace_disable 00:10:51.689 07:56:57 -- common/autotest_common.sh@10 -- # set +x 00:10:51.689 07:56:57 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:51.689 07:56:57 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:51.947 [2024-07-13 07:56:57.519487] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:51.947 07:56:57 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.204 07:56:57 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:52.204 07:56:57 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.462 07:56:58 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:52.462 07:56:58 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.720 07:56:58 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:52.720 07:56:58 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:52.978 07:56:58 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:52.978 07:56:58 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:53.235 07:56:58 -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.494 07:56:59 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:53.494 07:56:59 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:53.750 07:56:59 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:53.750 07:56:59 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:54.007 07:56:59 -- target/fio.sh@31 -- # 
concat_malloc_bdevs+=Malloc6 00:10:54.007 07:56:59 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:54.007 07:56:59 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.263 07:57:00 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:54.263 07:57:00 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:54.521 07:57:00 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:54.521 07:57:00 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:54.778 07:57:00 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:55.035 [2024-07-13 07:57:00.705738] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.035 07:57:00 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:55.292 07:57:00 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:55.550 07:57:01 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.550 07:57:01 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:55.550 07:57:01 -- common/autotest_common.sh@1177 -- # local i=0 00:10:55.551 07:57:01 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.551 07:57:01 -- common/autotest_common.sh@1179 -- # [[ -n 4 ]] 00:10:55.551 07:57:01 -- common/autotest_common.sh@1180 -- # nvme_device_counter=4 00:10:55.551 07:57:01 -- common/autotest_common.sh@1184 -- # sleep 2 00:10:58.075 07:57:03 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:10:58.075 07:57:03 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:10:58.075 07:57:03 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:10:58.075 07:57:03 -- common/autotest_common.sh@1186 -- # nvme_devices=4 00:10:58.075 07:57:03 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:10:58.075 07:57:03 -- common/autotest_common.sh@1187 -- # return 0 00:10:58.075 07:57:03 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:58.075 [global] 00:10:58.075 thread=1 00:10:58.075 invalidate=1 00:10:58.075 rw=write 00:10:58.075 time_based=1 00:10:58.075 runtime=1 00:10:58.075 ioengine=libaio 00:10:58.075 direct=1 00:10:58.075 bs=4096 00:10:58.075 iodepth=1 00:10:58.075 norandommap=0 00:10:58.075 numjobs=1 00:10:58.075 00:10:58.075 verify_dump=1 00:10:58.075 verify_backlog=512 00:10:58.075 verify_state_save=0 00:10:58.075 do_verify=1 00:10:58.075 verify=crc32c-intel 00:10:58.075 [job0] 00:10:58.075 filename=/dev/nvme0n1 00:10:58.075 [job1] 00:10:58.075 filename=/dev/nvme0n2 00:10:58.075 [job2] 00:10:58.075 filename=/dev/nvme0n3 00:10:58.075 [job3] 00:10:58.075 filename=/dev/nvme0n4 00:10:58.075 Could not set queue depth (nvme0n1) 00:10:58.075 Could not set queue depth (nvme0n2) 
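Note: the RPC sequence traced above is the whole target-side configuration for this test: malloc bdevs are created, two of them are combined into raid0 and three into concat0, all four resulting bdevs are added as namespaces of a single subsystem, and a TCP listener is opened on the namespace-side address. Condensed into plain commands (arguments as they appear in the trace; rpc.py stands for the scripts/rpc.py path, and the host NQN/ID variables stand for the generated values shown earlier in the log):

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512          # repeated, yielding Malloc0 .. Malloc6
    rpc.py bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"

With four namespaces in the subsystem, the connect is expected to surface four block devices (nvme0n1 through nvme0n4), which is what the waitforserial helper counts before launching the fio-wrapper write job whose per-device output continues below.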
00:10:58.075 Could not set queue depth (nvme0n3) 00:10:58.075 Could not set queue depth (nvme0n4) 00:10:58.075 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.075 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.075 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.075 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:58.075 fio-3.35 00:10:58.075 Starting 4 threads 00:10:59.009 00:10:59.009 job0: (groupid=0, jobs=1): err= 0: pid=73359: Sat Jul 13 07:57:04 2024 00:10:59.009 read: IOPS=3035, BW=11.9MiB/s (12.4MB/s)(11.9MiB/1001msec) 00:10:59.009 slat (nsec): min=12125, max=30251, avg=14197.96, stdev=1615.58 00:10:59.009 clat (usec): min=131, max=335, avg=164.41, stdev=12.01 00:10:59.009 lat (usec): min=144, max=351, avg=178.60, stdev=12.20 00:10:59.009 clat percentiles (usec): 00:10:59.009 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 155], 00:10:59.009 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 167], 00:10:59.009 | 70.00th=[ 172], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 184], 00:10:59.009 | 99.00th=[ 194], 99.50th=[ 196], 99.90th=[ 202], 99.95th=[ 215], 00:10:59.009 | 99.99th=[ 334] 00:10:59.009 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:59.009 slat (usec): min=14, max=112, avg=21.17, stdev= 4.08 00:10:59.009 clat (usec): min=92, max=328, avg=124.37, stdev=12.71 00:10:59.009 lat (usec): min=112, max=365, avg=145.54, stdev=14.06 00:10:59.009 clat percentiles (usec): 00:10:59.009 | 1.00th=[ 100], 5.00th=[ 108], 10.00th=[ 111], 20.00th=[ 116], 00:10:59.009 | 30.00th=[ 119], 40.00th=[ 122], 50.00th=[ 124], 60.00th=[ 127], 00:10:59.009 | 70.00th=[ 130], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 143], 00:10:59.009 | 99.00th=[ 153], 99.50th=[ 161], 99.90th=[ 239], 99.95th=[ 281], 00:10:59.009 | 99.99th=[ 330] 00:10:59.009 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:59.009 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:59.009 lat (usec) : 100=0.51%, 250=99.43%, 500=0.07% 00:10:59.009 cpu : usr=2.50%, sys=8.20%, ctx=6111, majf=0, minf=11 00:10:59.009 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.009 issued rwts: total=3039,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.009 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.009 job1: (groupid=0, jobs=1): err= 0: pid=73360: Sat Jul 13 07:57:04 2024 00:10:59.009 read: IOPS=1762, BW=7049KiB/s (7218kB/s)(7056KiB/1001msec) 00:10:59.009 slat (nsec): min=13049, max=63894, avg=19389.58, stdev=7308.94 00:10:59.009 clat (usec): min=162, max=560, avg=271.19, stdev=38.92 00:10:59.009 lat (usec): min=182, max=579, avg=290.58, stdev=41.18 00:10:59.009 clat percentiles (usec): 00:10:59.009 | 1.00th=[ 200], 5.00th=[ 229], 10.00th=[ 239], 20.00th=[ 247], 00:10:59.009 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:10:59.009 | 70.00th=[ 277], 80.00th=[ 285], 90.00th=[ 314], 95.00th=[ 343], 00:10:59.009 | 99.00th=[ 424], 99.50th=[ 469], 99.90th=[ 519], 99.95th=[ 562], 00:10:59.009 | 99.99th=[ 562] 00:10:59.009 write: IOPS=2045, BW=8184KiB/s 
(8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:59.009 slat (nsec): min=16823, max=87175, avg=24667.76, stdev=5504.41 00:10:59.009 clat (usec): min=101, max=404, avg=209.32, stdev=43.99 00:10:59.009 lat (usec): min=124, max=461, avg=233.99, stdev=46.05 00:10:59.009 clat percentiles (usec): 00:10:59.009 | 1.00th=[ 114], 5.00th=[ 129], 10.00th=[ 176], 20.00th=[ 188], 00:10:59.009 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:10:59.009 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 241], 95.00th=[ 310], 00:10:59.009 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 388], 99.95th=[ 396], 00:10:59.009 | 99.99th=[ 404] 00:10:59.009 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:59.009 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:59.009 lat (usec) : 250=60.34%, 500=39.56%, 750=0.10% 00:10:59.009 cpu : usr=1.30%, sys=7.10%, ctx=3812, majf=0, minf=12 00:10:59.009 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.009 issued rwts: total=1764,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.009 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.009 job2: (groupid=0, jobs=1): err= 0: pid=73361: Sat Jul 13 07:57:04 2024 00:10:59.009 read: IOPS=1805, BW=7221KiB/s (7394kB/s)(7228KiB/1001msec) 00:10:59.009 slat (nsec): min=12866, max=46176, avg=17000.19, stdev=3650.04 00:10:59.009 clat (usec): min=167, max=524, avg=278.46, stdev=46.18 00:10:59.009 lat (usec): min=181, max=541, avg=295.46, stdev=48.21 00:10:59.009 clat percentiles (usec): 00:10:59.009 | 1.00th=[ 225], 5.00th=[ 237], 10.00th=[ 243], 20.00th=[ 251], 00:10:59.009 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:10:59.009 | 70.00th=[ 281], 80.00th=[ 293], 90.00th=[ 314], 95.00th=[ 347], 00:10:59.009 | 99.00th=[ 486], 99.50th=[ 498], 99.90th=[ 510], 99.95th=[ 529], 00:10:59.009 | 99.99th=[ 529] 00:10:59.009 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:10:59.009 slat (usec): min=14, max=134, avg=25.15, stdev= 8.74 00:10:59.009 clat (usec): min=102, max=294, avg=198.79, stdev=28.17 00:10:59.009 lat (usec): min=122, max=376, avg=223.94, stdev=30.15 00:10:59.009 clat percentiles (usec): 00:10:59.009 | 1.00th=[ 114], 5.00th=[ 130], 10.00th=[ 172], 20.00th=[ 184], 00:10:59.009 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:10:59.009 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 239], 00:10:59.009 | 99.00th=[ 255], 99.50th=[ 262], 99.90th=[ 273], 99.95th=[ 277], 00:10:59.009 | 99.99th=[ 297] 00:10:59.009 bw ( KiB/s): min= 8192, max= 8192, per=20.02%, avg=8192.00, stdev= 0.00, samples=1 00:10:59.009 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:59.009 lat (usec) : 250=60.78%, 500=39.07%, 750=0.16% 00:10:59.009 cpu : usr=1.10%, sys=7.10%, ctx=3856, majf=0, minf=7 00:10:59.009 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.009 issued rwts: total=1807,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.009 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.009 job3: (groupid=0, jobs=1): err= 0: pid=73362: Sat Jul 13 07:57:04 2024 
00:10:59.009 read: IOPS=2796, BW=10.9MiB/s (11.5MB/s)(10.9MiB/1001msec) 00:10:59.009 slat (nsec): min=11500, max=34067, avg=13967.38, stdev=1794.56 00:10:59.009 clat (usec): min=137, max=252, avg=171.99, stdev=13.55 00:10:59.009 lat (usec): min=150, max=266, avg=185.95, stdev=13.96 00:10:59.009 clat percentiles (usec): 00:10:59.009 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 161], 00:10:59.009 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:10:59.009 | 70.00th=[ 180], 80.00th=[ 184], 90.00th=[ 190], 95.00th=[ 196], 00:10:59.009 | 99.00th=[ 208], 99.50th=[ 212], 99.90th=[ 221], 99.95th=[ 221], 00:10:59.009 | 99.99th=[ 253] 00:10:59.009 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:59.009 slat (nsec): min=13347, max=95573, avg=20550.10, stdev=3831.17 00:10:59.009 clat (usec): min=95, max=238, avg=132.57, stdev=12.41 00:10:59.009 lat (usec): min=113, max=333, avg=153.12, stdev=13.40 00:10:59.010 clat percentiles (usec): 00:10:59.010 | 1.00th=[ 105], 5.00th=[ 114], 10.00th=[ 118], 20.00th=[ 123], 00:10:59.010 | 30.00th=[ 126], 40.00th=[ 129], 50.00th=[ 133], 60.00th=[ 137], 00:10:59.010 | 70.00th=[ 139], 80.00th=[ 143], 90.00th=[ 149], 95.00th=[ 153], 00:10:59.010 | 99.00th=[ 165], 99.50th=[ 169], 99.90th=[ 178], 99.95th=[ 190], 00:10:59.010 | 99.99th=[ 239] 00:10:59.010 bw ( KiB/s): min=12288, max=12288, per=30.03%, avg=12288.00, stdev= 0.00, samples=1 00:10:59.010 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:59.010 lat (usec) : 100=0.09%, 250=99.90%, 500=0.02% 00:10:59.010 cpu : usr=2.60%, sys=7.50%, ctx=5878, majf=0, minf=5 00:10:59.010 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:59.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:59.010 issued rwts: total=2799,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:59.010 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:59.010 00:10:59.010 Run status group 0 (all jobs): 00:10:59.010 READ: bw=36.7MiB/s (38.5MB/s), 7049KiB/s-11.9MiB/s (7218kB/s-12.4MB/s), io=36.8MiB (38.5MB), run=1001-1001msec 00:10:59.010 WRITE: bw=40.0MiB/s (41.9MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=40.0MiB (41.9MB), run=1001-1001msec 00:10:59.010 00:10:59.010 Disk stats (read/write): 00:10:59.010 nvme0n1: ios=2610/2688, merge=0/0, ticks=447/356, in_queue=803, util=87.17% 00:10:59.010 nvme0n2: ios=1572/1709, merge=0/0, ticks=460/377, in_queue=837, util=88.22% 00:10:59.010 nvme0n3: ios=1536/1785, merge=0/0, ticks=432/379, in_queue=811, util=89.20% 00:10:59.010 nvme0n4: ios=2453/2560, merge=0/0, ticks=423/365, in_queue=788, util=89.67% 00:10:59.010 07:57:04 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:59.010 [global] 00:10:59.010 thread=1 00:10:59.010 invalidate=1 00:10:59.010 rw=randwrite 00:10:59.010 time_based=1 00:10:59.010 runtime=1 00:10:59.010 ioengine=libaio 00:10:59.010 direct=1 00:10:59.010 bs=4096 00:10:59.010 iodepth=1 00:10:59.010 norandommap=0 00:10:59.010 numjobs=1 00:10:59.010 00:10:59.010 verify_dump=1 00:10:59.010 verify_backlog=512 00:10:59.010 verify_state_save=0 00:10:59.010 do_verify=1 00:10:59.010 verify=crc32c-intel 00:10:59.010 [job0] 00:10:59.010 filename=/dev/nvme0n1 00:10:59.010 [job1] 00:10:59.010 filename=/dev/nvme0n2 00:10:59.010 [job2] 00:10:59.010 filename=/dev/nvme0n3 00:10:59.010 [job3] 00:10:59.010 
filename=/dev/nvme0n4 00:10:59.010 Could not set queue depth (nvme0n1) 00:10:59.010 Could not set queue depth (nvme0n2) 00:10:59.010 Could not set queue depth (nvme0n3) 00:10:59.010 Could not set queue depth (nvme0n4) 00:10:59.268 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.268 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.268 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.268 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:59.268 fio-3.35 00:10:59.268 Starting 4 threads 00:11:00.680 00:11:00.680 job0: (groupid=0, jobs=1): err= 0: pid=73414: Sat Jul 13 07:57:06 2024 00:11:00.680 read: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec) 00:11:00.680 slat (usec): min=10, max=439, avg=13.48, stdev= 8.01 00:11:00.680 clat (usec): min=3, max=6467, avg=164.61, stdev=126.51 00:11:00.680 lat (usec): min=132, max=6480, avg=178.09, stdev=126.78 00:11:00.680 clat percentiles (usec): 00:11:00.680 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:11:00.680 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:11:00.680 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 186], 00:11:00.680 | 99.00th=[ 206], 99.50th=[ 219], 99.90th=[ 1467], 99.95th=[ 2212], 00:11:00.680 | 99.99th=[ 6456] 00:11:00.680 write: IOPS=3202, BW=12.5MiB/s (13.1MB/s)(12.5MiB/1001msec); 0 zone resets 00:11:00.680 slat (usec): min=13, max=126, avg=20.26, stdev= 4.01 00:11:00.680 clat (usec): min=87, max=179, avg=117.78, stdev=11.50 00:11:00.680 lat (usec): min=104, max=292, avg=138.04, stdev=12.27 00:11:00.680 clat percentiles (usec): 00:11:00.680 | 1.00th=[ 94], 5.00th=[ 99], 10.00th=[ 103], 20.00th=[ 109], 00:11:00.680 | 30.00th=[ 112], 40.00th=[ 115], 50.00th=[ 118], 60.00th=[ 121], 00:11:00.680 | 70.00th=[ 124], 80.00th=[ 127], 90.00th=[ 133], 95.00th=[ 137], 00:11:00.680 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 167], 99.95th=[ 172], 00:11:00.680 | 99.99th=[ 180] 00:11:00.680 bw ( KiB/s): min=13048, max=13048, per=41.79%, avg=13048.00, stdev= 0.00, samples=1 00:11:00.680 iops : min= 3262, max= 3262, avg=3262.00, stdev= 0.00, samples=1 00:11:00.680 lat (usec) : 4=0.02%, 100=3.04%, 250=96.85%, 750=0.03% 00:11:00.680 lat (msec) : 2=0.03%, 4=0.02%, 10=0.02% 00:11:00.680 cpu : usr=2.40%, sys=8.40%, ctx=6280, majf=0, minf=7 00:11:00.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.680 issued rwts: total=3072,3206,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.680 job1: (groupid=0, jobs=1): err= 0: pid=73415: Sat Jul 13 07:57:06 2024 00:11:00.680 read: IOPS=1528, BW=6114KiB/s (6261kB/s)(6120KiB/1001msec) 00:11:00.680 slat (nsec): min=9804, max=43064, avg=17320.10, stdev=3965.71 00:11:00.680 clat (usec): min=233, max=769, avg=336.26, stdev=37.13 00:11:00.680 lat (usec): min=252, max=788, avg=353.58, stdev=37.94 00:11:00.680 clat percentiles (usec): 00:11:00.680 | 1.00th=[ 265], 5.00th=[ 293], 10.00th=[ 310], 20.00th=[ 318], 00:11:00.680 | 30.00th=[ 326], 40.00th=[ 330], 50.00th=[ 334], 60.00th=[ 338], 00:11:00.680 | 70.00th=[ 343], 80.00th=[ 
351], 90.00th=[ 359], 95.00th=[ 375], 00:11:00.680 | 99.00th=[ 474], 99.50th=[ 578], 99.90th=[ 725], 99.95th=[ 766], 00:11:00.680 | 99.99th=[ 766] 00:11:00.680 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:00.680 slat (nsec): min=12929, max=87683, avg=26908.97, stdev=5696.59 00:11:00.680 clat (usec): min=160, max=879, avg=267.52, stdev=36.15 00:11:00.680 lat (usec): min=178, max=904, avg=294.43, stdev=38.81 00:11:00.680 clat percentiles (usec): 00:11:00.680 | 1.00th=[ 178], 5.00th=[ 210], 10.00th=[ 241], 20.00th=[ 251], 00:11:00.680 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 273], 00:11:00.680 | 70.00th=[ 277], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:11:00.680 | 99.00th=[ 404], 99.50th=[ 437], 99.90th=[ 486], 99.95th=[ 881], 00:11:00.680 | 99.99th=[ 881] 00:11:00.680 bw ( KiB/s): min= 8192, max= 8192, per=26.24%, avg=8192.00, stdev= 0.00, samples=1 00:11:00.680 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:00.680 lat (usec) : 250=9.95%, 500=89.66%, 750=0.33%, 1000=0.07% 00:11:00.680 cpu : usr=1.40%, sys=6.10%, ctx=3066, majf=0, minf=9 00:11:00.680 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.680 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.680 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.680 issued rwts: total=1530,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.680 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.680 job2: (groupid=0, jobs=1): err= 0: pid=73416: Sat Jul 13 07:57:06 2024 00:11:00.680 read: IOPS=1503, BW=6014KiB/s (6158kB/s)(6020KiB/1001msec) 00:11:00.680 slat (nsec): min=15322, max=59396, avg=22350.73, stdev=4012.74 00:11:00.680 clat (usec): min=229, max=1591, avg=333.69, stdev=55.47 00:11:00.680 lat (usec): min=282, max=1618, avg=356.05, stdev=55.89 00:11:00.680 clat percentiles (usec): 00:11:00.680 | 1.00th=[ 277], 5.00th=[ 293], 10.00th=[ 306], 20.00th=[ 314], 00:11:00.681 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 326], 60.00th=[ 334], 00:11:00.681 | 70.00th=[ 338], 80.00th=[ 347], 90.00th=[ 355], 95.00th=[ 363], 00:11:00.681 | 99.00th=[ 635], 99.50th=[ 660], 99.90th=[ 758], 99.95th=[ 1598], 00:11:00.681 | 99.99th=[ 1598] 00:11:00.681 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:00.681 slat (nsec): min=22767, max=65642, avg=32211.22, stdev=4226.74 00:11:00.681 clat (usec): min=113, max=7682, avg=264.77, stdev=199.39 00:11:00.681 lat (usec): min=142, max=7706, avg=296.99, stdev=199.26 00:11:00.681 clat percentiles (usec): 00:11:00.681 | 1.00th=[ 128], 5.00th=[ 215], 10.00th=[ 233], 20.00th=[ 243], 00:11:00.681 | 30.00th=[ 251], 40.00th=[ 258], 50.00th=[ 262], 60.00th=[ 269], 00:11:00.681 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:11:00.681 | 99.00th=[ 314], 99.50th=[ 359], 99.90th=[ 1729], 99.95th=[ 7701], 00:11:00.681 | 99.99th=[ 7701] 00:11:00.681 bw ( KiB/s): min= 8192, max= 8192, per=26.24%, avg=8192.00, stdev= 0.00, samples=1 00:11:00.681 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:00.681 lat (usec) : 250=14.80%, 500=84.15%, 750=0.82%, 1000=0.07% 00:11:00.681 lat (msec) : 2=0.13%, 10=0.03% 00:11:00.681 cpu : usr=2.00%, sys=6.30%, ctx=3041, majf=0, minf=11 00:11:00.681 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.681 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.681 issued rwts: total=1505,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.681 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.681 job3: (groupid=0, jobs=1): err= 0: pid=73417: Sat Jul 13 07:57:06 2024 00:11:00.681 read: IOPS=1529, BW=6118KiB/s (6265kB/s)(6124KiB/1001msec) 00:11:00.681 slat (nsec): min=9319, max=42483, avg=12598.05, stdev=3046.87 00:11:00.681 clat (usec): min=174, max=778, avg=341.52, stdev=37.81 00:11:00.681 lat (usec): min=211, max=790, avg=354.12, stdev=37.80 00:11:00.681 clat percentiles (usec): 00:11:00.681 | 1.00th=[ 265], 5.00th=[ 293], 10.00th=[ 314], 20.00th=[ 322], 00:11:00.681 | 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 338], 60.00th=[ 347], 00:11:00.681 | 70.00th=[ 351], 80.00th=[ 355], 90.00th=[ 367], 95.00th=[ 375], 00:11:00.681 | 99.00th=[ 490], 99.50th=[ 562], 99.90th=[ 742], 99.95th=[ 783], 00:11:00.681 | 99.99th=[ 783] 00:11:00.681 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:11:00.681 slat (nsec): min=10763, max=72409, avg=19821.37, stdev=4531.49 00:11:00.681 clat (usec): min=154, max=897, avg=275.15, stdev=38.18 00:11:00.681 lat (usec): min=178, max=914, avg=294.97, stdev=39.22 00:11:00.681 clat percentiles (usec): 00:11:00.681 | 1.00th=[ 178], 5.00th=[ 212], 10.00th=[ 247], 20.00th=[ 258], 00:11:00.681 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 277], 60.00th=[ 281], 00:11:00.681 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 314], 00:11:00.681 | 99.00th=[ 420], 99.50th=[ 441], 99.90th=[ 562], 99.95th=[ 898], 00:11:00.681 | 99.99th=[ 898] 00:11:00.681 bw ( KiB/s): min= 8208, max= 8208, per=26.29%, avg=8208.00, stdev= 0.00, samples=1 00:11:00.681 iops : min= 2052, max= 2052, avg=2052.00, stdev= 0.00, samples=1 00:11:00.681 lat (usec) : 250=6.68%, 500=92.86%, 750=0.39%, 1000=0.07% 00:11:00.681 cpu : usr=1.00%, sys=4.30%, ctx=3067, majf=0, minf=20 00:11:00.681 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:00.681 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.681 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:00.681 issued rwts: total=1531,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:00.681 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:00.681 00:11:00.681 Run status group 0 (all jobs): 00:11:00.681 READ: bw=29.8MiB/s (31.3MB/s), 6014KiB/s-12.0MiB/s (6158kB/s-12.6MB/s), io=29.8MiB (31.3MB), run=1001-1001msec 00:11:00.681 WRITE: bw=30.5MiB/s (32.0MB/s), 6138KiB/s-12.5MiB/s (6285kB/s-13.1MB/s), io=30.5MiB (32.0MB), run=1001-1001msec 00:11:00.681 00:11:00.681 Disk stats (read/write): 00:11:00.681 nvme0n1: ios=2610/2911, merge=0/0, ticks=440/359, in_queue=799, util=88.08% 00:11:00.681 nvme0n2: ios=1194/1536, merge=0/0, ticks=403/426, in_queue=829, util=89.10% 00:11:00.681 nvme0n3: ios=1126/1536, merge=0/0, ticks=385/424, in_queue=809, util=88.99% 00:11:00.681 nvme0n4: ios=1145/1536, merge=0/0, ticks=364/389, in_queue=753, util=89.76% 00:11:00.681 07:57:06 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:00.681 [global] 00:11:00.681 thread=1 00:11:00.681 invalidate=1 00:11:00.681 rw=write 00:11:00.681 time_based=1 00:11:00.681 runtime=1 00:11:00.681 ioengine=libaio 00:11:00.681 direct=1 00:11:00.681 bs=4096 00:11:00.681 iodepth=128 00:11:00.681 norandommap=0 00:11:00.681 numjobs=1 00:11:00.681 00:11:00.681 verify_dump=1 00:11:00.681 
verify_backlog=512 00:11:00.681 verify_state_save=0 00:11:00.681 do_verify=1 00:11:00.681 verify=crc32c-intel 00:11:00.681 [job0] 00:11:00.681 filename=/dev/nvme0n1 00:11:00.681 [job1] 00:11:00.681 filename=/dev/nvme0n2 00:11:00.681 [job2] 00:11:00.681 filename=/dev/nvme0n3 00:11:00.681 [job3] 00:11:00.681 filename=/dev/nvme0n4 00:11:00.681 Could not set queue depth (nvme0n1) 00:11:00.681 Could not set queue depth (nvme0n2) 00:11:00.681 Could not set queue depth (nvme0n3) 00:11:00.681 Could not set queue depth (nvme0n4) 00:11:00.681 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.681 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.681 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.681 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:00.681 fio-3.35 00:11:00.681 Starting 4 threads 00:11:02.056 00:11:02.056 job0: (groupid=0, jobs=1): err= 0: pid=73465: Sat Jul 13 07:57:07 2024 00:11:02.056 read: IOPS=5626, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec) 00:11:02.056 slat (usec): min=4, max=2614, avg=80.27, stdev=369.25 00:11:02.056 clat (usec): min=7862, max=12998, avg=10849.07, stdev=573.68 00:11:02.056 lat (usec): min=8815, max=14169, avg=10929.34, stdev=457.10 00:11:02.056 clat percentiles (usec): 00:11:02.056 | 1.00th=[ 8586], 5.00th=[10028], 10.00th=[10290], 20.00th=[10421], 00:11:02.056 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10814], 60.00th=[10945], 00:11:02.056 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11469], 95.00th=[11600], 00:11:02.056 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12125], 99.95th=[12256], 00:11:02.056 | 99.99th=[13042] 00:11:02.056 write: IOPS=5986, BW=23.4MiB/s (24.5MB/s)(23.4MiB/1001msec); 0 zone resets 00:11:02.056 slat (usec): min=11, max=2449, avg=84.12, stdev=345.63 00:11:02.056 clat (usec): min=636, max=12969, avg=10913.25, stdev=965.60 00:11:02.056 lat (usec): min=656, max=13007, avg=10997.37, stdev=910.67 00:11:02.056 clat percentiles (usec): 00:11:02.056 | 1.00th=[ 6194], 5.00th=[10028], 10.00th=[10421], 20.00th=[10552], 00:11:02.056 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:11:02.056 | 70.00th=[11338], 80.00th=[11469], 90.00th=[11731], 95.00th=[11863], 00:11:02.056 | 99.00th=[11994], 99.50th=[12125], 99.90th=[12125], 99.95th=[12649], 00:11:02.056 | 99.99th=[12911] 00:11:02.056 bw ( KiB/s): min=24576, max=24576, per=36.49%, avg=24576.00, stdev= 0.00, samples=1 00:11:02.056 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1 00:11:02.056 lat (usec) : 750=0.05%, 1000=0.02% 00:11:02.056 lat (msec) : 4=0.28%, 10=4.28%, 20=95.37% 00:11:02.056 cpu : usr=3.60%, sys=17.00%, ctx=377, majf=0, minf=11 00:11:02.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:02.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.056 issued rwts: total=5632,5992,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.056 job1: (groupid=0, jobs=1): err= 0: pid=73466: Sat Jul 13 07:57:07 2024 00:11:02.056 read: IOPS=2549, BW=9.96MiB/s (10.4MB/s)(10.0MiB/1004msec) 00:11:02.056 slat (usec): min=8, max=9641, avg=189.84, stdev=811.68 00:11:02.056 clat (usec): min=15534, 
max=35176, avg=24512.97, stdev=3526.13 00:11:02.056 lat (usec): min=15547, max=35201, avg=24702.81, stdev=3535.20 00:11:02.056 clat percentiles (usec): 00:11:02.056 | 1.00th=[16319], 5.00th=[18744], 10.00th=[20055], 20.00th=[21890], 00:11:02.056 | 30.00th=[22938], 40.00th=[23200], 50.00th=[23987], 60.00th=[25035], 00:11:02.056 | 70.00th=[26346], 80.00th=[27657], 90.00th=[29492], 95.00th=[30278], 00:11:02.056 | 99.00th=[32375], 99.50th=[32900], 99.90th=[34341], 99.95th=[34341], 00:11:02.056 | 99.99th=[35390] 00:11:02.056 write: IOPS=3022, BW=11.8MiB/s (12.4MB/s)(11.9MiB/1004msec); 0 zone resets 00:11:02.056 slat (usec): min=10, max=7109, avg=161.61, stdev=715.22 00:11:02.056 clat (usec): min=3205, max=35891, avg=20971.42, stdev=5094.30 00:11:02.056 lat (usec): min=3227, max=35923, avg=21133.04, stdev=5117.76 00:11:02.056 clat percentiles (usec): 00:11:02.056 | 1.00th=[ 3949], 5.00th=[13829], 10.00th=[15664], 20.00th=[17171], 00:11:02.056 | 30.00th=[18220], 40.00th=[19006], 50.00th=[20055], 60.00th=[21627], 00:11:02.056 | 70.00th=[23725], 80.00th=[25297], 90.00th=[27132], 95.00th=[30540], 00:11:02.056 | 99.00th=[32375], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:11:02.056 | 99.99th=[35914] 00:11:02.056 bw ( KiB/s): min=10954, max=12288, per=17.25%, avg=11621.00, stdev=943.28, samples=2 00:11:02.056 iops : min= 2738, max= 3072, avg=2905.00, stdev=236.17, samples=2 00:11:02.056 lat (msec) : 4=0.59%, 10=0.59%, 20=29.83%, 50=68.99% 00:11:02.056 cpu : usr=2.29%, sys=8.57%, ctx=658, majf=0, minf=14 00:11:02.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:11:02.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.056 issued rwts: total=2560,3035,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.056 job2: (groupid=0, jobs=1): err= 0: pid=73467: Sat Jul 13 07:57:07 2024 00:11:02.056 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:11:02.056 slat (usec): min=4, max=8731, avg=193.67, stdev=858.46 00:11:02.056 clat (usec): min=16717, max=37146, avg=24832.20, stdev=3137.41 00:11:02.056 lat (usec): min=16731, max=37185, avg=25025.87, stdev=3151.66 00:11:02.056 clat percentiles (usec): 00:11:02.056 | 1.00th=[17957], 5.00th=[20317], 10.00th=[21365], 20.00th=[22414], 00:11:02.056 | 30.00th=[22938], 40.00th=[23725], 50.00th=[24249], 60.00th=[24773], 00:11:02.056 | 70.00th=[26608], 80.00th=[27395], 90.00th=[28967], 95.00th=[30278], 00:11:02.056 | 99.00th=[33817], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:11:02.056 | 99.99th=[36963] 00:11:02.056 write: IOPS=2723, BW=10.6MiB/s (11.2MB/s)(10.7MiB/1002msec); 0 zone resets 00:11:02.056 slat (usec): min=9, max=6921, avg=177.24, stdev=729.96 00:11:02.056 clat (usec): min=285, max=37725, avg=22801.24, stdev=4903.47 00:11:02.056 lat (usec): min=4229, max=37749, avg=22978.48, stdev=4916.52 00:11:02.056 clat percentiles (usec): 00:11:02.056 | 1.00th=[ 8848], 5.00th=[16712], 10.00th=[17171], 20.00th=[19268], 00:11:02.056 | 30.00th=[20579], 40.00th=[21627], 50.00th=[22676], 60.00th=[23725], 00:11:02.056 | 70.00th=[25035], 80.00th=[25822], 90.00th=[27657], 95.00th=[31589], 00:11:02.056 | 99.00th=[36963], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:11:02.056 | 99.99th=[37487] 00:11:02.056 bw ( KiB/s): min= 8913, max=11912, per=15.46%, avg=10412.50, stdev=2120.61, samples=2 00:11:02.056 iops : min= 2228, max= 2978, 
avg=2603.00, stdev=530.33, samples=2 00:11:02.056 lat (usec) : 500=0.02% 00:11:02.056 lat (msec) : 10=0.83%, 20=14.39%, 50=84.76% 00:11:02.056 cpu : usr=2.30%, sys=7.89%, ctx=655, majf=0, minf=9 00:11:02.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:02.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.056 issued rwts: total=2560,2729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.056 job3: (groupid=0, jobs=1): err= 0: pid=73468: Sat Jul 13 07:57:07 2024 00:11:02.056 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:11:02.056 slat (usec): min=5, max=6209, avg=94.16, stdev=491.82 00:11:02.056 clat (usec): min=5595, max=19043, avg=12271.34, stdev=1565.39 00:11:02.056 lat (usec): min=5607, max=19076, avg=12365.50, stdev=1598.78 00:11:02.056 clat percentiles (usec): 00:11:02.056 | 1.00th=[ 8094], 5.00th=[ 9503], 10.00th=[10552], 20.00th=[11338], 00:11:02.056 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 00:11:02.056 | 70.00th=[12911], 80.00th=[13173], 90.00th=[13960], 95.00th=[14484], 00:11:02.056 | 99.00th=[17171], 99.50th=[17695], 99.90th=[19006], 99.95th=[19006], 00:11:02.056 | 99.99th=[19006] 00:11:02.056 write: IOPS=5133, BW=20.1MiB/s (21.0MB/s)(20.1MiB/1003msec); 0 zone resets 00:11:02.056 slat (usec): min=10, max=5262, avg=92.35, stdev=461.08 00:11:02.056 clat (usec): min=1969, max=19456, avg=12408.55, stdev=1551.84 00:11:02.056 lat (usec): min=1989, max=19556, avg=12500.91, stdev=1610.76 00:11:02.056 clat percentiles (usec): 00:11:02.056 | 1.00th=[ 8160], 5.00th=[10159], 10.00th=[11076], 20.00th=[11600], 00:11:02.056 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12387], 60.00th=[12649], 00:11:02.056 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13829], 95.00th=[14353], 00:11:02.056 | 99.00th=[17433], 99.50th=[18220], 99.90th=[19530], 99.95th=[19530], 00:11:02.056 | 99.99th=[19530] 00:11:02.056 bw ( KiB/s): min=20439, max=20521, per=30.41%, avg=20480.00, stdev=57.98, samples=2 00:11:02.056 iops : min= 5109, max= 5130, avg=5119.50, stdev=14.85, samples=2 00:11:02.056 lat (msec) : 2=0.02%, 4=0.11%, 10=5.04%, 20=94.83% 00:11:02.056 cpu : usr=4.49%, sys=14.97%, ctx=459, majf=0, minf=11 00:11:02.056 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:02.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:02.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:02.056 issued rwts: total=5120,5149,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:02.056 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:02.056 00:11:02.056 Run status group 0 (all jobs): 00:11:02.056 READ: bw=61.8MiB/s (64.8MB/s), 9.96MiB/s-22.0MiB/s (10.4MB/s-23.0MB/s), io=62.0MiB (65.0MB), run=1001-1004msec 00:11:02.056 WRITE: bw=65.8MiB/s (69.0MB/s), 10.6MiB/s-23.4MiB/s (11.2MB/s-24.5MB/s), io=66.0MiB (69.2MB), run=1001-1004msec 00:11:02.056 00:11:02.056 Disk stats (read/write): 00:11:02.056 nvme0n1: ios=4947/5120, merge=0/0, ticks=11591/11967, in_queue=23558, util=88.08% 00:11:02.056 nvme0n2: ios=2266/2560, merge=0/0, ticks=17865/15549, in_queue=33414, util=89.18% 00:11:02.056 nvme0n3: ios=2048/2521, merge=0/0, ticks=15766/17741, in_queue=33507, util=88.79% 00:11:02.056 nvme0n4: ios=4177/4608, merge=0/0, ticks=24755/24932, in_queue=49687, util=89.86% 00:11:02.056 07:57:07 -- 
target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:02.056 [global] 00:11:02.056 thread=1 00:11:02.056 invalidate=1 00:11:02.056 rw=randwrite 00:11:02.056 time_based=1 00:11:02.056 runtime=1 00:11:02.056 ioengine=libaio 00:11:02.056 direct=1 00:11:02.056 bs=4096 00:11:02.056 iodepth=128 00:11:02.056 norandommap=0 00:11:02.056 numjobs=1 00:11:02.056 00:11:02.056 verify_dump=1 00:11:02.056 verify_backlog=512 00:11:02.056 verify_state_save=0 00:11:02.056 do_verify=1 00:11:02.056 verify=crc32c-intel 00:11:02.056 [job0] 00:11:02.056 filename=/dev/nvme0n1 00:11:02.056 [job1] 00:11:02.056 filename=/dev/nvme0n2 00:11:02.056 [job2] 00:11:02.056 filename=/dev/nvme0n3 00:11:02.056 [job3] 00:11:02.056 filename=/dev/nvme0n4 00:11:02.056 Could not set queue depth (nvme0n1) 00:11:02.056 Could not set queue depth (nvme0n2) 00:11:02.056 Could not set queue depth (nvme0n3) 00:11:02.056 Could not set queue depth (nvme0n4) 00:11:02.056 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.056 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.056 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.056 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:02.056 fio-3.35 00:11:02.056 Starting 4 threads 00:11:03.432 00:11:03.432 job0: (groupid=0, jobs=1): err= 0: pid=73515: Sat Jul 13 07:57:08 2024 00:11:03.432 read: IOPS=2293, BW=9172KiB/s (9393kB/s)(9200KiB/1003msec) 00:11:03.432 slat (usec): min=10, max=14642, avg=233.95, stdev=984.06 00:11:03.432 clat (usec): min=792, max=52835, avg=29666.85, stdev=9431.65 00:11:03.432 lat (usec): min=4243, max=52860, avg=29900.79, stdev=9472.45 00:11:03.432 clat percentiles (usec): 00:11:03.432 | 1.00th=[14615], 5.00th=[18220], 10.00th=[18482], 20.00th=[19006], 00:11:03.432 | 30.00th=[23462], 40.00th=[25822], 50.00th=[28967], 60.00th=[33424], 00:11:03.432 | 70.00th=[34866], 80.00th=[36439], 90.00th=[43779], 95.00th=[46924], 00:11:03.432 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[52691], 00:11:03.432 | 99.99th=[52691] 00:11:03.432 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:11:03.432 slat (usec): min=11, max=13147, avg=172.89, stdev=904.16 00:11:03.432 clat (usec): min=12085, max=39204, avg=22814.96, stdev=5644.88 00:11:03.432 lat (usec): min=12109, max=39225, avg=22987.85, stdev=5669.57 00:11:03.432 clat percentiles (usec): 00:11:03.433 | 1.00th=[13042], 5.00th=[13435], 10.00th=[15401], 20.00th=[18220], 00:11:03.433 | 30.00th=[19268], 40.00th=[21103], 50.00th=[23725], 60.00th=[23987], 00:11:03.433 | 70.00th=[24511], 80.00th=[28181], 90.00th=[29492], 95.00th=[32900], 00:11:03.433 | 99.00th=[36963], 99.50th=[36963], 99.90th=[38011], 99.95th=[39060], 00:11:03.433 | 99.99th=[39060] 00:11:03.433 bw ( KiB/s): min= 8192, max=12312, per=17.95%, avg=10252.00, stdev=2913.28, samples=2 00:11:03.433 iops : min= 2048, max= 3078, avg=2563.00, stdev=728.32, samples=2 00:11:03.433 lat (usec) : 1000=0.02% 00:11:03.433 lat (msec) : 10=0.14%, 20=30.72%, 50=68.42%, 100=0.70% 00:11:03.433 cpu : usr=2.10%, sys=8.08%, ctx=295, majf=0, minf=11 00:11:03.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:11:03.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:03.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.433 issued rwts: total=2300,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.433 job1: (groupid=0, jobs=1): err= 0: pid=73516: Sat Jul 13 07:57:08 2024 00:11:03.433 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.0MiB/1002msec) 00:11:03.433 slat (usec): min=4, max=12183, avg=215.52, stdev=1223.36 00:11:03.433 clat (usec): min=13020, max=48312, avg=28265.83, stdev=9354.93 00:11:03.433 lat (usec): min=15452, max=48334, avg=28481.35, stdev=9344.07 00:11:03.433 clat percentiles (usec): 00:11:03.433 | 1.00th=[15533], 5.00th=[18744], 10.00th=[20055], 20.00th=[21627], 00:11:03.433 | 30.00th=[22152], 40.00th=[22414], 50.00th=[24511], 60.00th=[25297], 00:11:03.433 | 70.00th=[31065], 80.00th=[36963], 90.00th=[46400], 95.00th=[47973], 00:11:03.433 | 99.00th=[48497], 99.50th=[48497], 99.90th=[48497], 99.95th=[48497], 00:11:03.433 | 99.99th=[48497] 00:11:03.433 write: IOPS=2747, BW=10.7MiB/s (11.3MB/s)(10.8MiB/1002msec); 0 zone resets 00:11:03.433 slat (usec): min=10, max=12829, avg=154.93, stdev=799.66 00:11:03.433 clat (usec): min=351, max=36053, avg=19325.66, stdev=5578.73 00:11:03.433 lat (usec): min=2727, max=36083, avg=19480.59, stdev=5561.80 00:11:03.433 clat percentiles (usec): 00:11:03.433 | 1.00th=[ 3425], 5.00th=[13304], 10.00th=[15008], 20.00th=[15533], 00:11:03.433 | 30.00th=[15926], 40.00th=[16188], 50.00th=[17433], 60.00th=[19006], 00:11:03.433 | 70.00th=[21365], 80.00th=[25560], 90.00th=[27919], 95.00th=[28967], 00:11:03.433 | 99.00th=[35914], 99.50th=[35914], 99.90th=[35914], 99.95th=[35914], 00:11:03.433 | 99.99th=[35914] 00:11:03.433 bw ( KiB/s): min= 9216, max=11807, per=18.41%, avg=10511.50, stdev=1832.11, samples=2 00:11:03.433 iops : min= 2304, max= 2951, avg=2627.50, stdev=457.50, samples=2 00:11:03.433 lat (usec) : 500=0.02% 00:11:03.433 lat (msec) : 4=0.60%, 10=0.49%, 20=36.66%, 50=62.22% 00:11:03.433 cpu : usr=2.50%, sys=8.39%, ctx=167, majf=0, minf=17 00:11:03.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:11:03.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.433 issued rwts: total=2560,2753,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.433 job2: (groupid=0, jobs=1): err= 0: pid=73517: Sat Jul 13 07:57:08 2024 00:11:03.433 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:11:03.433 slat (usec): min=4, max=3280, avg=81.50, stdev=363.52 00:11:03.433 clat (usec): min=8144, max=15347, avg=11004.84, stdev=1278.22 00:11:03.433 lat (usec): min=8165, max=15828, avg=11086.34, stdev=1289.45 00:11:03.433 clat percentiles (usec): 00:11:03.433 | 1.00th=[ 8717], 5.00th=[ 9241], 10.00th=[ 9503], 20.00th=[ 9896], 00:11:03.433 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10814], 60.00th=[11207], 00:11:03.433 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12780], 95.00th=[13435], 00:11:03.433 | 99.00th=[14222], 99.50th=[14484], 99.90th=[14877], 99.95th=[14877], 00:11:03.433 | 99.99th=[15401] 00:11:03.433 write: IOPS=5916, BW=23.1MiB/s (24.2MB/s)(23.2MiB/1003msec); 0 zone resets 00:11:03.433 slat (usec): min=8, max=3678, avg=84.20, stdev=397.63 00:11:03.433 clat (usec): min=144, max=15771, avg=10915.62, stdev=1252.09 00:11:03.433 lat (usec): min=2801, max=15808, avg=10999.82, stdev=1305.34 
00:11:03.433 clat percentiles (usec): 00:11:03.433 | 1.00th=[ 7242], 5.00th=[ 9503], 10.00th=[ 9896], 20.00th=[10159], 00:11:03.433 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10683], 60.00th=[10945], 00:11:03.433 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:11:03.433 | 99.00th=[13435], 99.50th=[14877], 99.90th=[15533], 99.95th=[15664], 00:11:03.433 | 99.99th=[15795] 00:11:03.433 bw ( KiB/s): min=22776, max=23672, per=40.67%, avg=23224.00, stdev=633.57, samples=2 00:11:03.433 iops : min= 5694, max= 5918, avg=5806.00, stdev=158.39, samples=2 00:11:03.433 lat (usec) : 250=0.01% 00:11:03.433 lat (msec) : 4=0.36%, 10=17.98%, 20=81.64% 00:11:03.433 cpu : usr=5.09%, sys=15.57%, ctx=433, majf=0, minf=9 00:11:03.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:11:03.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.433 issued rwts: total=5632,5934,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.433 job3: (groupid=0, jobs=1): err= 0: pid=73518: Sat Jul 13 07:57:08 2024 00:11:03.433 read: IOPS=2804, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1003msec) 00:11:03.433 slat (usec): min=4, max=11997, avg=181.22, stdev=866.64 00:11:03.433 clat (usec): min=304, max=49679, avg=22862.52, stdev=12504.90 00:11:03.433 lat (usec): min=9580, max=50796, avg=23043.74, stdev=12587.32 00:11:03.433 clat percentiles (usec): 00:11:03.433 | 1.00th=[ 9765], 5.00th=[11731], 10.00th=[11863], 20.00th=[11994], 00:11:03.433 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12911], 60.00th=[29230], 00:11:03.433 | 70.00th=[33424], 80.00th=[35914], 90.00th=[41681], 95.00th=[44827], 00:11:03.433 | 99.00th=[46924], 99.50th=[46924], 99.90th=[49546], 99.95th=[49546], 00:11:03.433 | 99.99th=[49546] 00:11:03.433 write: IOPS=3062, BW=12.0MiB/s (12.5MB/s)(12.0MiB/1003msec); 0 zone resets 00:11:03.433 slat (usec): min=10, max=9189, avg=152.48, stdev=756.98 00:11:03.433 clat (usec): min=9211, max=43994, avg=20262.02, stdev=8167.81 00:11:03.433 lat (usec): min=11107, max=44663, avg=20414.50, stdev=8202.66 00:11:03.433 clat percentiles (usec): 00:11:03.433 | 1.00th=[10421], 5.00th=[11994], 10.00th=[12387], 20.00th=[12649], 00:11:03.433 | 30.00th=[12780], 40.00th=[12911], 50.00th=[18744], 60.00th=[23725], 00:11:03.433 | 70.00th=[24249], 80.00th=[27919], 90.00th=[30278], 95.00th=[36439], 00:11:03.433 | 99.00th=[40109], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:11:03.433 | 99.99th=[43779] 00:11:03.433 bw ( KiB/s): min= 8192, max=16416, per=21.55%, avg=12304.00, stdev=5815.25, samples=2 00:11:03.433 iops : min= 2048, max= 4104, avg=3076.00, stdev=1453.81, samples=2 00:11:03.433 lat (usec) : 500=0.02% 00:11:03.433 lat (msec) : 10=0.99%, 20=53.31%, 50=45.69% 00:11:03.433 cpu : usr=3.39%, sys=7.68%, ctx=371, majf=0, minf=13 00:11:03.433 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:11:03.433 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:03.433 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:03.433 issued rwts: total=2813,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:03.433 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:03.433 00:11:03.433 Run status group 0 (all jobs): 00:11:03.433 READ: bw=51.8MiB/s (54.3MB/s), 9172KiB/s-21.9MiB/s (9393kB/s-23.0MB/s), io=52.0MiB (54.5MB), run=1002-1003msec 
00:11:03.433 WRITE: bw=55.8MiB/s (58.5MB/s), 9.97MiB/s-23.1MiB/s (10.5MB/s-24.2MB/s), io=55.9MiB (58.7MB), run=1002-1003msec 00:11:03.433 00:11:03.433 Disk stats (read/write): 00:11:03.433 nvme0n1: ios=2098/2146, merge=0/0, ticks=19754/13752, in_queue=33506, util=87.27% 00:11:03.433 nvme0n2: ios=2156/2560, merge=0/0, ticks=14096/11193, in_queue=25289, util=88.47% 00:11:03.433 nvme0n3: ios=4712/5120, merge=0/0, ticks=16198/15612, in_queue=31810, util=89.25% 00:11:03.433 nvme0n4: ios=2560/2689, merge=0/0, ticks=16432/13571, in_queue=30003, util=88.66% 00:11:03.433 07:57:08 -- target/fio.sh@55 -- # sync 00:11:03.433 07:57:08 -- target/fio.sh@59 -- # fio_pid=73525 00:11:03.433 07:57:08 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:03.433 07:57:08 -- target/fio.sh@61 -- # sleep 3 00:11:03.433 [global] 00:11:03.433 thread=1 00:11:03.433 invalidate=1 00:11:03.433 rw=read 00:11:03.433 time_based=1 00:11:03.433 runtime=10 00:11:03.433 ioengine=libaio 00:11:03.433 direct=1 00:11:03.433 bs=4096 00:11:03.433 iodepth=1 00:11:03.433 norandommap=1 00:11:03.433 numjobs=1 00:11:03.433 00:11:03.433 [job0] 00:11:03.433 filename=/dev/nvme0n1 00:11:03.433 [job1] 00:11:03.433 filename=/dev/nvme0n2 00:11:03.433 [job2] 00:11:03.433 filename=/dev/nvme0n3 00:11:03.433 [job3] 00:11:03.433 filename=/dev/nvme0n4 00:11:03.433 Could not set queue depth (nvme0n1) 00:11:03.433 Could not set queue depth (nvme0n2) 00:11:03.433 Could not set queue depth (nvme0n3) 00:11:03.433 Could not set queue depth (nvme0n4) 00:11:03.433 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.433 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.433 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.433 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:03.433 fio-3.35 00:11:03.433 Starting 4 threads 00:11:06.716 07:57:11 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:06.716 fio: pid=73568, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:06.716 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=62545920, buflen=4096 00:11:06.716 07:57:12 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:06.716 fio: pid=73567, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:06.716 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=55529472, buflen=4096 00:11:06.716 07:57:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.716 07:57:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:06.974 fio: pid=73565, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:06.974 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=7802880, buflen=4096 00:11:06.974 07:57:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:06.974 07:57:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:07.232 fio: pid=73566, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:07.232 fio: io_u error on file /dev/nvme0n2: Remote I/O error: 
read offset=65478656, buflen=4096 00:11:07.232 00:11:07.232 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=73565: Sat Jul 13 07:57:12 2024 00:11:07.232 read: IOPS=5286, BW=20.6MiB/s (21.7MB/s)(71.4MiB/3460msec) 00:11:07.232 slat (usec): min=8, max=11880, avg=16.04, stdev=144.99 00:11:07.232 clat (usec): min=123, max=2720, avg=171.77, stdev=46.57 00:11:07.232 lat (usec): min=134, max=12063, avg=187.81, stdev=153.30 00:11:07.232 clat percentiles (usec): 00:11:07.232 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:11:07.232 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 169], 00:11:07.232 | 70.00th=[ 176], 80.00th=[ 184], 90.00th=[ 204], 95.00th=[ 229], 00:11:07.232 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 594], 99.95th=[ 906], 00:11:07.232 | 99.99th=[ 1942] 00:11:07.232 bw ( KiB/s): min=21560, max=22592, per=32.33%, avg=22094.67, stdev=455.38, samples=6 00:11:07.232 iops : min= 5388, max= 5648, avg=5523.67, stdev=114.68, samples=6 00:11:07.232 lat (usec) : 250=96.16%, 500=3.69%, 750=0.07%, 1000=0.03% 00:11:07.232 lat (msec) : 2=0.04%, 4=0.01% 00:11:07.232 cpu : usr=1.10%, sys=6.82%, ctx=18299, majf=0, minf=1 00:11:07.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.232 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.232 issued rwts: total=18290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.232 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=73566: Sat Jul 13 07:57:12 2024 00:11:07.232 read: IOPS=4329, BW=16.9MiB/s (17.7MB/s)(62.4MiB/3693msec) 00:11:07.232 slat (usec): min=7, max=15088, avg=16.81, stdev=193.37 00:11:07.232 clat (usec): min=100, max=3580, avg=212.77, stdev=77.89 00:11:07.232 lat (usec): min=134, max=15302, avg=229.58, stdev=208.77 00:11:07.232 clat percentiles (usec): 00:11:07.232 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 157], 00:11:07.232 | 30.00th=[ 165], 40.00th=[ 180], 50.00th=[ 212], 60.00th=[ 239], 00:11:07.232 | 70.00th=[ 249], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 285], 00:11:07.232 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 1139], 99.95th=[ 1762], 00:11:07.232 | 99.99th=[ 2835] 00:11:07.232 bw ( KiB/s): min=14081, max=22528, per=25.36%, avg=17335.14, stdev=3575.85, samples=7 00:11:07.232 iops : min= 3520, max= 5632, avg=4333.71, stdev=894.01, samples=7 00:11:07.232 lat (usec) : 250=70.64%, 500=29.17%, 750=0.04%, 1000=0.01% 00:11:07.232 lat (msec) : 2=0.08%, 4=0.04% 00:11:07.232 cpu : usr=1.27%, sys=5.31%, ctx=16006, majf=0, minf=1 00:11:07.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.232 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.232 issued rwts: total=15987,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.232 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=73567: Sat Jul 13 07:57:12 2024 00:11:07.232 read: IOPS=4209, BW=16.4MiB/s (17.2MB/s)(53.0MiB/3221msec) 00:11:07.232 slat (usec): min=7, max=7822, avg=16.46, stdev=92.62 00:11:07.232 clat (usec): min=138, max=3598, avg=219.56, stdev=59.01 00:11:07.232 
lat (usec): min=153, max=8140, avg=236.02, stdev=110.12 00:11:07.232 clat percentiles (usec): 00:11:07.232 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 169], 00:11:07.232 | 30.00th=[ 180], 40.00th=[ 198], 50.00th=[ 231], 60.00th=[ 243], 00:11:07.232 | 70.00th=[ 251], 80.00th=[ 262], 90.00th=[ 273], 95.00th=[ 281], 00:11:07.232 | 99.00th=[ 306], 99.50th=[ 318], 99.90th=[ 388], 99.95th=[ 652], 00:11:07.232 | 99.99th=[ 2671] 00:11:07.232 bw ( KiB/s): min=14680, max=21080, per=25.06%, avg=17125.33, stdev=3069.34, samples=6 00:11:07.232 iops : min= 3670, max= 5270, avg=4281.33, stdev=767.34, samples=6 00:11:07.232 lat (usec) : 250=68.93%, 500=31.00%, 750=0.03%, 1000=0.01% 00:11:07.232 lat (msec) : 2=0.01%, 4=0.01% 00:11:07.232 cpu : usr=1.46%, sys=6.09%, ctx=13561, majf=0, minf=1 00:11:07.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.232 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.232 issued rwts: total=13558,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.232 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=73568: Sat Jul 13 07:57:12 2024 00:11:07.232 read: IOPS=5187, BW=20.3MiB/s (21.2MB/s)(59.6MiB/2944msec) 00:11:07.232 slat (nsec): min=10802, max=88283, avg=14537.92, stdev=3578.62 00:11:07.232 clat (usec): min=129, max=964, avg=176.87, stdev=19.97 00:11:07.232 lat (usec): min=141, max=979, avg=191.41, stdev=20.68 00:11:07.232 clat percentiles (usec): 00:11:07.232 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 157], 20.00th=[ 161], 00:11:07.232 | 30.00th=[ 167], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 180], 00:11:07.232 | 70.00th=[ 184], 80.00th=[ 190], 90.00th=[ 200], 95.00th=[ 212], 00:11:07.232 | 99.00th=[ 235], 99.50th=[ 241], 99.90th=[ 262], 99.95th=[ 281], 00:11:07.232 | 99.99th=[ 570] 00:11:07.232 bw ( KiB/s): min=20896, max=21128, per=30.75%, avg=21017.60, stdev=82.25, samples=5 00:11:07.232 iops : min= 5224, max= 5282, avg=5254.40, stdev=20.56, samples=5 00:11:07.232 lat (usec) : 250=99.74%, 500=0.24%, 750=0.01%, 1000=0.01% 00:11:07.232 cpu : usr=1.53%, sys=6.86%, ctx=15272, majf=0, minf=1 00:11:07.232 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:07.232 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.232 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:07.232 issued rwts: total=15271,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:07.232 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:07.232 00:11:07.232 Run status group 0 (all jobs): 00:11:07.232 READ: bw=66.7MiB/s (70.0MB/s), 16.4MiB/s-20.6MiB/s (17.2MB/s-21.7MB/s), io=246MiB (258MB), run=2944-3693msec 00:11:07.232 00:11:07.232 Disk stats (read/write): 00:11:07.232 nvme0n1: ios=17891/0, merge=0/0, ticks=3105/0, in_queue=3105, util=95.36% 00:11:07.232 nvme0n2: ios=15555/0, merge=0/0, ticks=3221/0, in_queue=3221, util=95.16% 00:11:07.232 nvme0n3: ios=13204/0, merge=0/0, ticks=2885/0, in_queue=2885, util=96.43% 00:11:07.232 nvme0n4: ios=14953/0, merge=0/0, ticks=2684/0, in_queue=2684, util=96.76% 00:11:07.232 07:57:12 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.232 07:57:12 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 
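The interleaved RPC calls above and below are the hotplug part of target/fio.sh: the second fio run (pid 73525) is still in flight when the script starts deleting the backing bdevs over RPC, which is why those fio jobs end with err=121 (Remote I/O error) and why the log later reports "fio failed as expected". The same pair of traced lines repeats below for Malloc3 through Malloc6. Reconstructed from the fio.sh@65/@66 xtrace above (a sketch of the loop, not the verbatim script):

for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs; do
    # each iteration shows up in the trace as target/fio.sh@65 followed by @66
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete "$malloc_bdev"
done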
00:11:07.488 07:57:13 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.488 07:57:13 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:07.746 07:57:13 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:07.746 07:57:13 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:08.004 07:57:13 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.004 07:57:13 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:08.262 07:57:13 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:08.262 07:57:13 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:08.521 07:57:14 -- target/fio.sh@69 -- # fio_status=0 00:11:08.521 07:57:14 -- target/fio.sh@70 -- # wait 73525 00:11:08.521 07:57:14 -- target/fio.sh@70 -- # fio_status=4 00:11:08.521 07:57:14 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.521 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.521 07:57:14 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.521 07:57:14 -- common/autotest_common.sh@1198 -- # local i=0 00:11:08.521 07:57:14 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:11:08.521 07:57:14 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.521 07:57:14 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:11:08.521 07:57:14 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.521 nvmf hotplug test: fio failed as expected 00:11:08.521 07:57:14 -- common/autotest_common.sh@1210 -- # return 0 00:11:08.521 07:57:14 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:08.521 07:57:14 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:08.521 07:57:14 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.780 07:57:14 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:08.780 07:57:14 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:08.780 07:57:14 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:08.780 07:57:14 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:08.780 07:57:14 -- target/fio.sh@91 -- # nvmftestfini 00:11:08.780 07:57:14 -- nvmf/common.sh@476 -- # nvmfcleanup 00:11:08.780 07:57:14 -- nvmf/common.sh@116 -- # sync 00:11:08.780 07:57:14 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:08.780 07:57:14 -- nvmf/common.sh@119 -- # set +e 00:11:08.780 07:57:14 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:08.780 07:57:14 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:08.780 rmmod nvme_tcp 00:11:08.780 rmmod nvme_fabrics 00:11:08.780 rmmod nvme_keyring 00:11:08.780 07:57:14 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:08.780 07:57:14 -- nvmf/common.sh@123 -- # set -e 00:11:08.780 07:57:14 -- nvmf/common.sh@124 -- # return 0 00:11:08.780 07:57:14 -- nvmf/common.sh@477 -- # '[' -n 73222 ']' 00:11:08.780 07:57:14 -- nvmf/common.sh@478 -- # killprocess 73222 00:11:08.780 07:57:14 -- common/autotest_common.sh@926 -- # '[' -z 73222 ']' 00:11:08.780 07:57:14 -- common/autotest_common.sh@930 -- # kill -0 
73222 00:11:08.780 07:57:14 -- common/autotest_common.sh@931 -- # uname 00:11:08.780 07:57:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:08.780 07:57:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73222 00:11:08.780 killing process with pid 73222 00:11:08.780 07:57:14 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:11:08.780 07:57:14 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:11:08.780 07:57:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73222' 00:11:08.780 07:57:14 -- common/autotest_common.sh@945 -- # kill 73222 00:11:08.780 07:57:14 -- common/autotest_common.sh@950 -- # wait 73222 00:11:09.039 07:57:14 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:09.039 07:57:14 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:09.039 07:57:14 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:09.039 07:57:14 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:09.039 07:57:14 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:09.039 07:57:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.039 07:57:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.039 07:57:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.039 07:57:14 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:09.039 00:11:09.039 real 0m18.942s 00:11:09.039 user 1m11.017s 00:11:09.039 sys 0m10.662s 00:11:09.039 07:57:14 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:09.039 07:57:14 -- common/autotest_common.sh@10 -- # set +x 00:11:09.039 ************************************ 00:11:09.039 END TEST nvmf_fio_target 00:11:09.039 ************************************ 00:11:09.039 07:57:14 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:09.039 07:57:14 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:09.039 07:57:14 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:09.039 07:57:14 -- common/autotest_common.sh@10 -- # set +x 00:11:09.039 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:11:09.039 ************************************ 00:11:09.039 START TEST nvmf_bdevio 00:11:09.039 ************************************ 00:11:09.039 07:57:14 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:09.039 * Looking for test storage... 
00:11:09.039 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:09.039 07:57:14 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:09.039 07:57:14 -- nvmf/common.sh@7 -- # uname -s 00:11:09.039 07:57:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.039 07:57:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.040 07:57:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.040 07:57:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.040 07:57:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.040 07:57:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.040 07:57:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.040 07:57:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.040 07:57:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.040 07:57:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.040 07:57:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:11:09.040 07:57:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:11:09.040 07:57:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.040 07:57:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.040 07:57:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:09.040 07:57:14 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:09.040 07:57:14 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.040 07:57:14 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.040 07:57:14 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.040 07:57:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.040 07:57:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.040 07:57:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.040 07:57:14 -- 
paths/export.sh@5 -- # export PATH 00:11:09.040 07:57:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.040 07:57:14 -- nvmf/common.sh@46 -- # : 0 00:11:09.040 07:57:14 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:09.040 07:57:14 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:09.040 07:57:14 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:09.040 07:57:14 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.040 07:57:14 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.040 07:57:14 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:09.040 07:57:14 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:09.040 07:57:14 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:09.040 07:57:14 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:09.040 07:57:14 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:09.040 07:57:14 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:09.040 07:57:14 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:09.040 07:57:14 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.040 07:57:14 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:09.040 07:57:14 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:09.040 07:57:14 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:09.040 07:57:14 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.040 07:57:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:09.040 07:57:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.298 07:57:14 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:09.298 07:57:14 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:09.298 07:57:14 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:09.298 07:57:14 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:09.298 07:57:14 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:09.298 07:57:14 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:09.298 07:57:14 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.298 07:57:14 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.298 07:57:14 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:09.298 07:57:14 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:09.298 07:57:14 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:09.298 07:57:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:09.298 07:57:14 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:09.298 07:57:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.298 07:57:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:09.298 07:57:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:09.298 07:57:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:09.298 07:57:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:09.298 07:57:14 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:09.298 
07:57:14 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:09.298 Cannot find device "nvmf_tgt_br" 00:11:09.298 07:57:14 -- nvmf/common.sh@154 -- # true 00:11:09.298 07:57:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:09.298 Cannot find device "nvmf_tgt_br2" 00:11:09.298 07:57:14 -- nvmf/common.sh@155 -- # true 00:11:09.298 07:57:14 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:09.298 07:57:14 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:09.298 Cannot find device "nvmf_tgt_br" 00:11:09.298 07:57:14 -- nvmf/common.sh@157 -- # true 00:11:09.298 07:57:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:09.298 Cannot find device "nvmf_tgt_br2" 00:11:09.298 07:57:14 -- nvmf/common.sh@158 -- # true 00:11:09.298 07:57:14 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:09.298 07:57:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:09.298 07:57:14 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:09.298 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.298 07:57:14 -- nvmf/common.sh@161 -- # true 00:11:09.298 07:57:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:09.298 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:09.298 07:57:14 -- nvmf/common.sh@162 -- # true 00:11:09.298 07:57:14 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:09.298 07:57:14 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:09.298 07:57:14 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:09.298 07:57:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:09.298 07:57:14 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:09.298 07:57:15 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:09.299 07:57:15 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:09.299 07:57:15 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:09.299 07:57:15 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:09.299 07:57:15 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:09.299 07:57:15 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:09.299 07:57:15 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:09.299 07:57:15 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:09.299 07:57:15 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:09.299 07:57:15 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:09.299 07:57:15 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:09.299 07:57:15 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:09.299 07:57:15 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:09.299 07:57:15 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:09.299 07:57:15 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:09.299 07:57:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:09.299 07:57:15 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:09.557 07:57:15 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:09.557 07:57:15 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:09.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:11:09.558 00:11:09.558 --- 10.0.0.2 ping statistics --- 00:11:09.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.558 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:09.558 07:57:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:09.558 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:09.558 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.031 ms 00:11:09.558 00:11:09.558 --- 10.0.0.3 ping statistics --- 00:11:09.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.558 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:11:09.558 07:57:15 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:09.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:09.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:11:09.558 00:11:09.558 --- 10.0.0.1 ping statistics --- 00:11:09.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.558 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:11:09.558 07:57:15 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.558 07:57:15 -- nvmf/common.sh@421 -- # return 0 00:11:09.558 07:57:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:09.558 07:57:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.558 07:57:15 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:09.558 07:57:15 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:09.558 07:57:15 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.558 07:57:15 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:09.558 07:57:15 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:09.558 07:57:15 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:09.558 07:57:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:09.558 07:57:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:09.558 07:57:15 -- common/autotest_common.sh@10 -- # set +x 00:11:09.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.558 07:57:15 -- nvmf/common.sh@469 -- # nvmfpid=73792 00:11:09.558 07:57:15 -- nvmf/common.sh@470 -- # waitforlisten 73792 00:11:09.558 07:57:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:09.558 07:57:15 -- common/autotest_common.sh@819 -- # '[' -z 73792 ']' 00:11:09.558 07:57:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.558 07:57:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:09.558 07:57:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:09.558 07:57:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:09.558 07:57:15 -- common/autotest_common.sh@10 -- # set +x 00:11:09.558 [2024-07-13 07:57:15.202878] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
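For readability, the nvmf_veth_init sequence traced above (nvmf/common.sh@165 through @201) amounts to one initiator-side veth pair left in the root namespace, two target-side pairs moved into the nvmf_tgt_ns_spdk namespace, everything joined by a bridge, and TCP port 4420 opened. A condensed sketch assembled from the xtrace, with the stale-interface teardown, link-up steps and ping checks omitted:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The nvmf_tgt application is then launched inside that namespace (nvmf/common.sh@468 above), which is where the SPDK/DPDK initialization messages below pick up.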
00:11:09.558 [2024-07-13 07:57:15.202976] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:09.558 [2024-07-13 07:57:15.344448] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:09.816 [2024-07-13 07:57:15.387073] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:09.816 [2024-07-13 07:57:15.387436] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:09.816 [2024-07-13 07:57:15.387588] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:09.816 [2024-07-13 07:57:15.387737] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:09.816 [2024-07-13 07:57:15.388128] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:09.816 [2024-07-13 07:57:15.388336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:09.816 [2024-07-13 07:57:15.388336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:09.816 [2024-07-13 07:57:15.388267] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:10.751 07:57:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:10.751 07:57:16 -- common/autotest_common.sh@852 -- # return 0 00:11:10.751 07:57:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:10.751 07:57:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:10.751 07:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:10.751 07:57:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.751 07:57:16 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:10.751 07:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:10.751 07:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:10.751 [2024-07-13 07:57:16.245164] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:10.751 07:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:10.751 07:57:16 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:10.751 07:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:10.751 07:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:10.751 Malloc0 00:11:10.751 07:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:10.751 07:57:16 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:10.751 07:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:10.751 07:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:10.751 07:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:10.751 07:57:16 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:10.751 07:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:10.751 07:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:10.751 07:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:10.751 07:57:16 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:10.751 07:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:10.751 07:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:10.751 
[2024-07-13 07:57:16.300437] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:10.751 07:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:10.751 07:57:16 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:10.751 07:57:16 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:10.751 07:57:16 -- nvmf/common.sh@520 -- # config=() 00:11:10.751 07:57:16 -- nvmf/common.sh@520 -- # local subsystem config 00:11:10.751 07:57:16 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:10.751 07:57:16 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:10.751 { 00:11:10.751 "params": { 00:11:10.751 "name": "Nvme$subsystem", 00:11:10.751 "trtype": "$TEST_TRANSPORT", 00:11:10.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:10.751 "adrfam": "ipv4", 00:11:10.751 "trsvcid": "$NVMF_PORT", 00:11:10.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:10.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:10.751 "hdgst": ${hdgst:-false}, 00:11:10.751 "ddgst": ${ddgst:-false} 00:11:10.751 }, 00:11:10.751 "method": "bdev_nvme_attach_controller" 00:11:10.751 } 00:11:10.751 EOF 00:11:10.751 )") 00:11:10.751 07:57:16 -- nvmf/common.sh@542 -- # cat 00:11:10.751 07:57:16 -- nvmf/common.sh@544 -- # jq . 00:11:10.751 07:57:16 -- nvmf/common.sh@545 -- # IFS=, 00:11:10.751 07:57:16 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:10.751 "params": { 00:11:10.751 "name": "Nvme1", 00:11:10.751 "trtype": "tcp", 00:11:10.751 "traddr": "10.0.0.2", 00:11:10.751 "adrfam": "ipv4", 00:11:10.751 "trsvcid": "4420", 00:11:10.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:10.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:10.751 "hdgst": false, 00:11:10.751 "ddgst": false 00:11:10.751 }, 00:11:10.751 "method": "bdev_nvme_attach_controller" 00:11:10.751 }' 00:11:10.751 [2024-07-13 07:57:16.360842] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:10.751 [2024-07-13 07:57:16.360941] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73825 ] 00:11:10.751 [2024-07-13 07:57:16.507007] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:10.751 [2024-07-13 07:57:16.548290] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.751 [2024-07-13 07:57:16.548394] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.751 [2024-07-13 07:57:16.548401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:11.009 [2024-07-13 07:57:16.686916] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
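This bdevio run consumes the listener announced above through SPDK's own NVMe bdev driver (the bdev_nvme_attach_controller JSON just printed), so no kernel-side connect happens here. The earlier fio target test instead reached the same kind of listener from the initiator side of the veth pair with nvme-cli; the exact invocation is not shown in this excerpt, but with the host NQN/ID values generated by nvmf/common.sh earlier in the log it would be along the lines of:

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 \
    --hostid=13d3a838-6067-4799-8998-c5cad9c1d570
nvme disconnect -n nqn.2016-06.io.spdk:cnode1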
00:11:11.009 [2024-07-13 07:57:16.686963] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:11.009 I/O targets: 00:11:11.009 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:11.009 00:11:11.009 00:11:11.009 CUnit - A unit testing framework for C - Version 2.1-3 00:11:11.009 http://cunit.sourceforge.net/ 00:11:11.009 00:11:11.009 00:11:11.009 Suite: bdevio tests on: Nvme1n1 00:11:11.009 Test: blockdev write read block ...passed 00:11:11.009 Test: blockdev write zeroes read block ...passed 00:11:11.009 Test: blockdev write zeroes read no split ...passed 00:11:11.009 Test: blockdev write zeroes read split ...passed 00:11:11.009 Test: blockdev write zeroes read split partial ...passed 00:11:11.009 Test: blockdev reset ...[2024-07-13 07:57:16.719717] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:11.009 [2024-07-13 07:57:16.719833] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf37720 (9): Bad file descriptor 00:11:11.009 [2024-07-13 07:57:16.736790] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:11.009 passed 00:11:11.009 Test: blockdev write read 8 blocks ...passed 00:11:11.009 Test: blockdev write read size > 128k ...passed 00:11:11.009 Test: blockdev write read invalid size ...passed 00:11:11.009 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:11.009 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:11.009 Test: blockdev write read max offset ...passed 00:11:11.009 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:11.009 Test: blockdev writev readv 8 blocks ...passed 00:11:11.009 Test: blockdev writev readv 30 x 1block ...passed 00:11:11.009 Test: blockdev writev readv block ...passed 00:11:11.009 Test: blockdev writev readv size > 128k ...passed 00:11:11.009 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:11.009 Test: blockdev comparev and writev ...[2024-07-13 07:57:16.747259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.009 [2024-07-13 07:57:16.747308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:11.009 [2024-07-13 07:57:16.747334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.009 [2024-07-13 07:57:16.747347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:11.009 [2024-07-13 07:57:16.747846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.009 [2024-07-13 07:57:16.747885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:11.009 [2024-07-13 07:57:16.747908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.009 [2024-07-13 07:57:16.747921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:11.009 [2024-07-13 07:57:16.748325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.009 [2024-07-13 07:57:16.748362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:11.009 [2024-07-13 07:57:16.748385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.009 [2024-07-13 07:57:16.748398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:11.009 [2024-07-13 07:57:16.748830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.009 [2024-07-13 07:57:16.748868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:11.009 [2024-07-13 07:57:16.748891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:11.009 [2024-07-13 07:57:16.748904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:11.009 passed 00:11:11.009 Test: blockdev nvme passthru rw ...passed 00:11:11.009 Test: blockdev nvme passthru vendor specific ...[2024-07-13 07:57:16.750479] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:11.009 [2024-07-13 07:57:16.750516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:11.009 [2024-07-13 07:57:16.750854] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:11.009 [2024-07-13 07:57:16.750893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:11.009 [2024-07-13 07:57:16.751161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:11.009 [2024-07-13 07:57:16.751267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:11.009 [2024-07-13 07:57:16.751605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:11.009 [2024-07-13 07:57:16.751641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:11.009 passed 00:11:11.009 Test: blockdev nvme admin passthru ...passed 00:11:11.009 Test: blockdev copy ...passed 00:11:11.009 00:11:11.009 Run Summary: Type Total Ran Passed Failed Inactive 00:11:11.009 suites 1 1 n/a 0 0 00:11:11.009 tests 23 23 23 0 0 00:11:11.009 asserts 152 152 152 0 n/a 00:11:11.009 00:11:11.009 Elapsed time = 0.158 seconds 00:11:11.267 07:57:16 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.267 07:57:16 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:11.267 07:57:16 -- common/autotest_common.sh@10 -- # set +x 00:11:11.267 07:57:16 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:11.267 07:57:16 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:11.267 07:57:16 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:11.267 07:57:16 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:11:11.267 07:57:16 -- nvmf/common.sh@116 -- # sync 00:11:11.267 07:57:16 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:11.267 07:57:16 -- nvmf/common.sh@119 -- # set +e 00:11:11.267 07:57:16 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:11.267 07:57:16 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:11.267 rmmod nvme_tcp 00:11:11.267 rmmod nvme_fabrics 00:11:11.267 rmmod nvme_keyring 00:11:11.267 07:57:16 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:11.267 07:57:17 -- nvmf/common.sh@123 -- # set -e 00:11:11.267 07:57:17 -- nvmf/common.sh@124 -- # return 0 00:11:11.267 07:57:17 -- nvmf/common.sh@477 -- # '[' -n 73792 ']' 00:11:11.267 07:57:17 -- nvmf/common.sh@478 -- # killprocess 73792 00:11:11.267 07:57:17 -- common/autotest_common.sh@926 -- # '[' -z 73792 ']' 00:11:11.267 07:57:17 -- common/autotest_common.sh@930 -- # kill -0 73792 00:11:11.267 07:57:17 -- common/autotest_common.sh@931 -- # uname 00:11:11.267 07:57:17 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:11.267 07:57:17 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73792 00:11:11.267 07:57:17 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:11:11.267 07:57:17 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:11:11.268 07:57:17 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73792' 00:11:11.268 killing process with pid 73792 00:11:11.268 07:57:17 -- common/autotest_common.sh@945 -- # kill 73792 00:11:11.268 07:57:17 -- common/autotest_common.sh@950 -- # wait 73792 00:11:11.526 07:57:17 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:11.526 07:57:17 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:11.526 07:57:17 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:11.526 07:57:17 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:11.526 07:57:17 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:11.526 07:57:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.526 07:57:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.526 07:57:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.526 07:57:17 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:11.526 00:11:11.526 real 0m2.469s 00:11:11.526 user 0m8.353s 00:11:11.526 sys 0m0.604s 00:11:11.526 07:57:17 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:11.526 07:57:17 -- common/autotest_common.sh@10 -- # set +x 00:11:11.526 ************************************ 00:11:11.526 END TEST nvmf_bdevio 00:11:11.526 ************************************ 00:11:11.526 07:57:17 -- nvmf/nvmf.sh@57 -- # '[' tcp = tcp ']' 00:11:11.526 07:57:17 -- nvmf/nvmf.sh@58 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:11.526 07:57:17 -- common/autotest_common.sh@1077 -- # '[' 4 -le 1 ']' 00:11:11.526 07:57:17 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:11.526 07:57:17 -- common/autotest_common.sh@10 -- # set +x 00:11:11.526 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:11:11.526 ************************************ 00:11:11.526 START TEST nvmf_bdevio_no_huge 00:11:11.526 ************************************ 00:11:11.526 07:57:17 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:11:11.785 * Looking for test 
storage... 00:11:11.785 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:11.785 07:57:17 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:11.785 07:57:17 -- nvmf/common.sh@7 -- # uname -s 00:11:11.785 07:57:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:11.785 07:57:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:11.785 07:57:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:11.785 07:57:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:11.785 07:57:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:11.785 07:57:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:11.785 07:57:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:11.785 07:57:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:11.785 07:57:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:11.785 07:57:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:11.785 07:57:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:11:11.785 07:57:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:11:11.785 07:57:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:11.785 07:57:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:11.785 07:57:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:11.785 07:57:17 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:11.785 07:57:17 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:11.785 07:57:17 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:11.785 07:57:17 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:11.785 07:57:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.785 07:57:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.785 07:57:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.785 07:57:17 -- 
paths/export.sh@5 -- # export PATH 00:11:11.785 07:57:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:11.785 07:57:17 -- nvmf/common.sh@46 -- # : 0 00:11:11.785 07:57:17 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:11.785 07:57:17 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:11.785 07:57:17 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:11.785 07:57:17 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:11.785 07:57:17 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:11.785 07:57:17 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:11.785 07:57:17 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:11.785 07:57:17 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:11.785 07:57:17 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:11.785 07:57:17 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:11.785 07:57:17 -- target/bdevio.sh@14 -- # nvmftestinit 00:11:11.785 07:57:17 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:11.785 07:57:17 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:11.785 07:57:17 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:11.785 07:57:17 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:11.785 07:57:17 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:11.785 07:57:17 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:11.785 07:57:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:11.785 07:57:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:11.785 07:57:17 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:11.785 07:57:17 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:11.785 07:57:17 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:11.785 07:57:17 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:11.785 07:57:17 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:11.785 07:57:17 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:11.785 07:57:17 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:11.785 07:57:17 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:11.785 07:57:17 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:11.785 07:57:17 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:11.785 07:57:17 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:11.785 07:57:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:11.785 07:57:17 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:11.786 07:57:17 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:11.786 07:57:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:11.786 07:57:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:11.786 07:57:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:11.786 07:57:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:11.786 07:57:17 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:11.786 
07:57:17 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:11:11.786 Cannot find device "nvmf_tgt_br" 00:11:11.786 07:57:17 -- nvmf/common.sh@154 -- # true 00:11:11.786 07:57:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:11.786 Cannot find device "nvmf_tgt_br2" 00:11:11.786 07:57:17 -- nvmf/common.sh@155 -- # true 00:11:11.786 07:57:17 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:11.786 07:57:17 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:11.786 Cannot find device "nvmf_tgt_br" 00:11:11.786 07:57:17 -- nvmf/common.sh@157 -- # true 00:11:11.786 07:57:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:11.786 Cannot find device "nvmf_tgt_br2" 00:11:11.786 07:57:17 -- nvmf/common.sh@158 -- # true 00:11:11.786 07:57:17 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:11.786 07:57:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:11.786 07:57:17 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:11.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.786 07:57:17 -- nvmf/common.sh@161 -- # true 00:11:11.786 07:57:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:11.786 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:11.786 07:57:17 -- nvmf/common.sh@162 -- # true 00:11:11.786 07:57:17 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:11.786 07:57:17 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:11.786 07:57:17 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:11.786 07:57:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:11.786 07:57:17 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:11.786 07:57:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:11.786 07:57:17 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:11.786 07:57:17 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:11.786 07:57:17 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:11.786 07:57:17 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:11.786 07:57:17 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:11.786 07:57:17 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:12.045 07:57:17 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:12.045 07:57:17 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:12.045 07:57:17 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:12.045 07:57:17 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:12.045 07:57:17 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:12.045 07:57:17 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:12.045 07:57:17 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:12.045 07:57:17 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:12.045 07:57:17 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:12.045 07:57:17 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:12.045 07:57:17 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:12.045 07:57:17 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:12.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:12.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:11:12.045 00:11:12.045 --- 10.0.0.2 ping statistics --- 00:11:12.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.045 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:11:12.045 07:57:17 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:12.045 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:12.045 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.055 ms 00:11:12.045 00:11:12.045 --- 10.0.0.3 ping statistics --- 00:11:12.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.045 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:11:12.045 07:57:17 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:12.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:12.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:12.045 00:11:12.045 --- 10.0.0.1 ping statistics --- 00:11:12.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:12.045 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:12.045 07:57:17 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:12.045 07:57:17 -- nvmf/common.sh@421 -- # return 0 00:11:12.045 07:57:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:12.045 07:57:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:12.045 07:57:17 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:12.045 07:57:17 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:12.045 07:57:17 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:12.045 07:57:17 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:12.045 07:57:17 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:12.045 07:57:17 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:12.045 07:57:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:12.045 07:57:17 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:12.045 07:57:17 -- common/autotest_common.sh@10 -- # set +x 00:11:12.045 07:57:17 -- nvmf/common.sh@469 -- # nvmfpid=73987 00:11:12.045 07:57:17 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:11:12.045 07:57:17 -- nvmf/common.sh@470 -- # waitforlisten 73987 00:11:12.045 07:57:17 -- common/autotest_common.sh@819 -- # '[' -z 73987 ']' 00:11:12.045 07:57:17 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.045 07:57:17 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:12.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.045 07:57:17 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.045 07:57:17 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:12.045 07:57:17 -- common/autotest_common.sh@10 -- # set +x 00:11:12.045 [2024-07-13 07:57:17.762011] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
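The nvmf_veth_init step traced above builds an isolated two-sided topology: the initiator side stays in the default namespace on 10.0.0.1, the target side lives in the nvmf_tgt_ns_spdk namespace on 10.0.0.2 and 10.0.0.3, and the veth peers are bridged through nvmf_br before connectivity is confirmed by the three pings. A condensed sketch of the equivalent commands, with names and addresses taken from the trace and all error handling, the second target interface, and cleanup omitted:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator namespace -> target namespace sanity check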
00:11:12.045 [2024-07-13 07:57:17.762082] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:11:12.304 [2024-07-13 07:57:17.898193] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:12.304 [2024-07-13 07:57:17.976182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:12.304 [2024-07-13 07:57:17.976353] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:12.304 [2024-07-13 07:57:17.976365] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:12.304 [2024-07-13 07:57:17.976373] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:12.304 [2024-07-13 07:57:17.976521] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:11:12.304 [2024-07-13 07:57:17.977028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:11:12.304 [2024-07-13 07:57:17.977165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:11:12.304 [2024-07-13 07:57:17.977170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:13.242 07:57:18 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:13.242 07:57:18 -- common/autotest_common.sh@852 -- # return 0 00:11:13.242 07:57:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:13.242 07:57:18 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:13.242 07:57:18 -- common/autotest_common.sh@10 -- # set +x 00:11:13.242 07:57:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:13.242 07:57:18 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:13.242 07:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:13.242 07:57:18 -- common/autotest_common.sh@10 -- # set +x 00:11:13.242 [2024-07-13 07:57:18.764447] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:13.242 07:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:13.242 07:57:18 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:13.242 07:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:13.242 07:57:18 -- common/autotest_common.sh@10 -- # set +x 00:11:13.242 Malloc0 00:11:13.242 07:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:13.242 07:57:18 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:13.242 07:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:13.242 07:57:18 -- common/autotest_common.sh@10 -- # set +x 00:11:13.242 07:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:13.242 07:57:18 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:13.242 07:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:13.242 07:57:18 -- common/autotest_common.sh@10 -- # set +x 00:11:13.242 07:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:13.242 07:57:18 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:13.242 07:57:18 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:13.242 07:57:18 -- common/autotest_common.sh@10 -- # set +x 00:11:13.242 
[2024-07-13 07:57:18.802292] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.242 07:57:18 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:13.242 07:57:18 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:11:13.242 07:57:18 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:13.242 07:57:18 -- nvmf/common.sh@520 -- # config=() 00:11:13.242 07:57:18 -- nvmf/common.sh@520 -- # local subsystem config 00:11:13.242 07:57:18 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:11:13.242 07:57:18 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:11:13.242 { 00:11:13.242 "params": { 00:11:13.242 "name": "Nvme$subsystem", 00:11:13.242 "trtype": "$TEST_TRANSPORT", 00:11:13.242 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:13.242 "adrfam": "ipv4", 00:11:13.242 "trsvcid": "$NVMF_PORT", 00:11:13.242 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:13.242 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:13.242 "hdgst": ${hdgst:-false}, 00:11:13.242 "ddgst": ${ddgst:-false} 00:11:13.242 }, 00:11:13.242 "method": "bdev_nvme_attach_controller" 00:11:13.242 } 00:11:13.242 EOF 00:11:13.242 )") 00:11:13.242 07:57:18 -- nvmf/common.sh@542 -- # cat 00:11:13.242 07:57:18 -- nvmf/common.sh@544 -- # jq . 00:11:13.242 07:57:18 -- nvmf/common.sh@545 -- # IFS=, 00:11:13.242 07:57:18 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:11:13.242 "params": { 00:11:13.242 "name": "Nvme1", 00:11:13.242 "trtype": "tcp", 00:11:13.242 "traddr": "10.0.0.2", 00:11:13.242 "adrfam": "ipv4", 00:11:13.242 "trsvcid": "4420", 00:11:13.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:13.242 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:13.242 "hdgst": false, 00:11:13.242 "ddgst": false 00:11:13.242 }, 00:11:13.242 "method": "bdev_nvme_attach_controller" 00:11:13.242 }' 00:11:13.242 [2024-07-13 07:57:18.852163] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:13.243 [2024-07-13 07:57:18.852667] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid74017 ] 00:11:13.243 [2024-07-13 07:57:18.994281] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:13.501 [2024-07-13 07:57:19.075829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.501 [2024-07-13 07:57:19.075948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:13.501 [2024-07-13 07:57:19.076220] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.501 [2024-07-13 07:57:19.216134] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
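Before bdevio attaches, the target is provisioned entirely over JSON-RPC: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a listener on 10.0.0.2:4420. A minimal sketch of that sequence, with rpc.py standing in for the rpc_cmd wrapper used by the test (it points at scripts/rpc.py as in the trace) and arguments copied from the log:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The JSON generated above is then handed to bdevio over /dev/fd/62 so it knows which controller to attach.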
00:11:13.502 [2024-07-13 07:57:19.216414] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:11:13.502 I/O targets: 00:11:13.502 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:13.502 00:11:13.502 00:11:13.502 CUnit - A unit testing framework for C - Version 2.1-3 00:11:13.502 http://cunit.sourceforge.net/ 00:11:13.502 00:11:13.502 00:11:13.502 Suite: bdevio tests on: Nvme1n1 00:11:13.502 Test: blockdev write read block ...passed 00:11:13.502 Test: blockdev write zeroes read block ...passed 00:11:13.502 Test: blockdev write zeroes read no split ...passed 00:11:13.502 Test: blockdev write zeroes read split ...passed 00:11:13.502 Test: blockdev write zeroes read split partial ...passed 00:11:13.502 Test: blockdev reset ...[2024-07-13 07:57:19.255074] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:13.502 [2024-07-13 07:57:19.255348] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1eb4590 (9): Bad file descriptor 00:11:13.502 [2024-07-13 07:57:19.275082] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:11:13.502 passed 00:11:13.502 Test: blockdev write read 8 blocks ...passed 00:11:13.502 Test: blockdev write read size > 128k ...passed 00:11:13.502 Test: blockdev write read invalid size ...passed 00:11:13.502 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:13.502 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:13.502 Test: blockdev write read max offset ...passed 00:11:13.502 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:13.502 Test: blockdev writev readv 8 blocks ...passed 00:11:13.502 Test: blockdev writev readv 30 x 1block ...passed 00:11:13.502 Test: blockdev writev readv block ...passed 00:11:13.502 Test: blockdev writev readv size > 128k ...passed 00:11:13.502 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:13.502 Test: blockdev comparev and writev ...[2024-07-13 07:57:19.286499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:13.502 [2024-07-13 07:57:19.286563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:13.502 [2024-07-13 07:57:19.286600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:13.502 [2024-07-13 07:57:19.286613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:13.502 [2024-07-13 07:57:19.287085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:13.502 [2024-07-13 07:57:19.287127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:13.502 [2024-07-13 07:57:19.287150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:13.502 [2024-07-13 07:57:19.287163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:13.502 [2024-07-13 07:57:19.287645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE 
sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:13.502 [2024-07-13 07:57:19.287684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:13.502 [2024-07-13 07:57:19.287707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:13.502 [2024-07-13 07:57:19.287720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:13.502 [2024-07-13 07:57:19.288139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:13.502 [2024-07-13 07:57:19.288178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:13.502 [2024-07-13 07:57:19.288201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:13.502 [2024-07-13 07:57:19.288214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:13.502 passed 00:11:13.502 Test: blockdev nvme passthru rw ...passed 00:11:13.502 Test: blockdev nvme passthru vendor specific ...[2024-07-13 07:57:19.289637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:13.502 [2024-07-13 07:57:19.289942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:13.502 [2024-07-13 07:57:19.290268] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:13.502 [2024-07-13 07:57:19.290307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:13.502 [2024-07-13 07:57:19.290515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:13.502 [2024-07-13 07:57:19.290915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:13.502 [2024-07-13 07:57:19.291236] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:13.502 [2024-07-13 07:57:19.291274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:13.502 passed 00:11:13.502 Test: blockdev nvme admin passthru ...passed 00:11:13.502 Test: blockdev copy ...passed 00:11:13.502 00:11:13.502 Run Summary: Type Total Ran Passed Failed Inactive 00:11:13.502 suites 1 1 n/a 0 0 00:11:13.502 tests 23 23 23 0 0 00:11:13.502 asserts 152 152 152 0 n/a 00:11:13.502 00:11:13.502 Elapsed time = 0.174 seconds 00:11:14.070 07:57:19 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.070 07:57:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:11:14.070 07:57:19 -- common/autotest_common.sh@10 -- # set +x 00:11:14.070 07:57:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:11:14.070 07:57:19 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:14.070 07:57:19 -- target/bdevio.sh@30 -- # nvmftestfini 00:11:14.070 07:57:19 -- nvmf/common.sh@476 
-- # nvmfcleanup 00:11:14.070 07:57:19 -- nvmf/common.sh@116 -- # sync 00:11:14.070 07:57:19 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:11:14.070 07:57:19 -- nvmf/common.sh@119 -- # set +e 00:11:14.070 07:57:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:11:14.070 07:57:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:11:14.070 rmmod nvme_tcp 00:11:14.070 rmmod nvme_fabrics 00:11:14.070 rmmod nvme_keyring 00:11:14.070 07:57:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:11:14.070 07:57:19 -- nvmf/common.sh@123 -- # set -e 00:11:14.070 07:57:19 -- nvmf/common.sh@124 -- # return 0 00:11:14.070 07:57:19 -- nvmf/common.sh@477 -- # '[' -n 73987 ']' 00:11:14.070 07:57:19 -- nvmf/common.sh@478 -- # killprocess 73987 00:11:14.070 07:57:19 -- common/autotest_common.sh@926 -- # '[' -z 73987 ']' 00:11:14.070 07:57:19 -- common/autotest_common.sh@930 -- # kill -0 73987 00:11:14.070 07:57:19 -- common/autotest_common.sh@931 -- # uname 00:11:14.070 07:57:19 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:14.070 07:57:19 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 73987 00:11:14.070 07:57:19 -- common/autotest_common.sh@932 -- # process_name=reactor_3 00:11:14.070 07:57:19 -- common/autotest_common.sh@936 -- # '[' reactor_3 = sudo ']' 00:11:14.070 07:57:19 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 73987' 00:11:14.070 killing process with pid 73987 00:11:14.070 07:57:19 -- common/autotest_common.sh@945 -- # kill 73987 00:11:14.070 07:57:19 -- common/autotest_common.sh@950 -- # wait 73987 00:11:14.328 07:57:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:11:14.328 07:57:20 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:11:14.328 07:57:20 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:11:14.328 07:57:20 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:14.328 07:57:20 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:11:14.328 07:57:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.328 07:57:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.328 07:57:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.328 07:57:20 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:11:14.328 00:11:14.328 real 0m2.810s 00:11:14.328 user 0m9.233s 00:11:14.328 sys 0m1.063s 00:11:14.328 07:57:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:14.328 07:57:20 -- common/autotest_common.sh@10 -- # set +x 00:11:14.328 ************************************ 00:11:14.328 END TEST nvmf_bdevio_no_huge 00:11:14.328 ************************************ 00:11:14.328 07:57:20 -- nvmf/nvmf.sh@59 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:14.328 07:57:20 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:11:14.328 07:57:20 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:11:14.328 07:57:20 -- common/autotest_common.sh@10 -- # set +x 00:11:14.328 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:11:14.328 ************************************ 00:11:14.328 START TEST nvmf_tls 00:11:14.328 ************************************ 00:11:14.328 07:57:20 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:11:14.588 * Looking for test storage... 
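Before the nvmf_tls suite gets going, the bdevio target is unwound in roughly the order visible in the trace. A sketch of that teardown; _remove_spdk_ns is not expanded in the log, so the namespace deletion shown here is an assumption about what it does:

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"     # killprocess 73987
    ip netns delete nvmf_tgt_ns_spdk       # assumed body of _remove_spdk_ns
    ip -4 addr flush nvmf_init_if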
00:11:14.588 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:14.588 07:57:20 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:14.588 07:57:20 -- nvmf/common.sh@7 -- # uname -s 00:11:14.588 07:57:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:14.588 07:57:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:14.588 07:57:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:14.588 07:57:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:14.588 07:57:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:14.588 07:57:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:14.588 07:57:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:14.588 07:57:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:14.588 07:57:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:14.588 07:57:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:14.588 07:57:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:11:14.588 07:57:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:11:14.588 07:57:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:14.588 07:57:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:14.588 07:57:20 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:14.588 07:57:20 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:14.588 07:57:20 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:14.588 07:57:20 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:14.588 07:57:20 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:14.588 07:57:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.588 07:57:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.588 07:57:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.588 07:57:20 -- paths/export.sh@5 
-- # export PATH 00:11:14.588 07:57:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:14.588 07:57:20 -- nvmf/common.sh@46 -- # : 0 00:11:14.588 07:57:20 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:11:14.588 07:57:20 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:11:14.588 07:57:20 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:11:14.588 07:57:20 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:14.588 07:57:20 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:14.588 07:57:20 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:11:14.588 07:57:20 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:11:14.588 07:57:20 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:11:14.588 07:57:20 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:14.588 07:57:20 -- target/tls.sh@71 -- # nvmftestinit 00:11:14.588 07:57:20 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:11:14.588 07:57:20 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:14.588 07:57:20 -- nvmf/common.sh@436 -- # prepare_net_devs 00:11:14.588 07:57:20 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:11:14.588 07:57:20 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:11:14.588 07:57:20 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.588 07:57:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:14.588 07:57:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:14.589 07:57:20 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:11:14.589 07:57:20 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:11:14.589 07:57:20 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:11:14.589 07:57:20 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:11:14.589 07:57:20 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:11:14.589 07:57:20 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:11:14.589 07:57:20 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:14.589 07:57:20 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:14.589 07:57:20 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:11:14.589 07:57:20 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:11:14.589 07:57:20 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:14.589 07:57:20 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:14.589 07:57:20 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:14.589 07:57:20 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:14.589 07:57:20 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:14.589 07:57:20 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:14.589 07:57:20 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:14.589 07:57:20 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:14.589 07:57:20 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:11:14.589 07:57:20 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br 
nomaster 00:11:14.589 Cannot find device "nvmf_tgt_br" 00:11:14.589 07:57:20 -- nvmf/common.sh@154 -- # true 00:11:14.589 07:57:20 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:11:14.589 Cannot find device "nvmf_tgt_br2" 00:11:14.589 07:57:20 -- nvmf/common.sh@155 -- # true 00:11:14.589 07:57:20 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:11:14.589 07:57:20 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:11:14.589 Cannot find device "nvmf_tgt_br" 00:11:14.589 07:57:20 -- nvmf/common.sh@157 -- # true 00:11:14.589 07:57:20 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:11:14.589 Cannot find device "nvmf_tgt_br2" 00:11:14.589 07:57:20 -- nvmf/common.sh@158 -- # true 00:11:14.589 07:57:20 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:11:14.589 07:57:20 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:11:14.589 07:57:20 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:14.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:14.589 07:57:20 -- nvmf/common.sh@161 -- # true 00:11:14.589 07:57:20 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:14.589 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:14.589 07:57:20 -- nvmf/common.sh@162 -- # true 00:11:14.589 07:57:20 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:11:14.589 07:57:20 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:14.589 07:57:20 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:14.849 07:57:20 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:14.849 07:57:20 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:14.849 07:57:20 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:14.849 07:57:20 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:14.849 07:57:20 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:11:14.849 07:57:20 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:11:14.849 07:57:20 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:11:14.849 07:57:20 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:11:14.849 07:57:20 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:11:14.849 07:57:20 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:11:14.849 07:57:20 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:14.849 07:57:20 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:14.849 07:57:20 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:14.849 07:57:20 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:11:14.849 07:57:20 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:11:14.849 07:57:20 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:11:14.849 07:57:20 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:14.849 07:57:20 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:14.849 07:57:20 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:14.849 07:57:20 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o 
nvmf_br -j ACCEPT 00:11:14.849 07:57:20 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:11:14.849 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:14.849 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:11:14.849 00:11:14.849 --- 10.0.0.2 ping statistics --- 00:11:14.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.849 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:11:14.849 07:57:20 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:11:14.849 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:14.849 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.051 ms 00:11:14.849 00:11:14.849 --- 10.0.0.3 ping statistics --- 00:11:14.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.849 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:14.849 07:57:20 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:14.849 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:14.849 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:11:14.849 00:11:14.849 --- 10.0.0.1 ping statistics --- 00:11:14.849 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:14.849 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:11:14.849 07:57:20 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:14.849 07:57:20 -- nvmf/common.sh@421 -- # return 0 00:11:14.849 07:57:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:11:14.849 07:57:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:14.849 07:57:20 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:11:14.849 07:57:20 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:11:14.849 07:57:20 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:14.849 07:57:20 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:11:14.849 07:57:20 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:11:14.849 07:57:20 -- target/tls.sh@72 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:11:14.849 07:57:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:14.849 07:57:20 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:14.849 07:57:20 -- common/autotest_common.sh@10 -- # set +x 00:11:14.849 07:57:20 -- nvmf/common.sh@469 -- # nvmfpid=74183 00:11:14.849 07:57:20 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:11:14.849 07:57:20 -- nvmf/common.sh@470 -- # waitforlisten 74183 00:11:14.849 07:57:20 -- common/autotest_common.sh@819 -- # '[' -z 74183 ']' 00:11:14.849 07:57:20 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:14.849 07:57:20 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:14.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:14.849 07:57:20 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:14.849 07:57:20 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:14.849 07:57:20 -- common/autotest_common.sh@10 -- # set +x 00:11:15.108 [2024-07-13 07:57:20.675981] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
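For the TLS suite the target is started with --wait-for-rpc so that the ssl socket implementation can be configured before the framework initializes, and waitforlisten then blocks until the RPC socket answers. A sketch under those assumptions (the polling loop is an assumed equivalent of waitforlisten, not its actual body):

    ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
    nvmfpid=$!
    # assumed equivalent of waitforlisten: poll until the UNIX-domain RPC socket responds
    until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done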
00:11:15.108 [2024-07-13 07:57:20.676062] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.108 [2024-07-13 07:57:20.817608] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.108 [2024-07-13 07:57:20.853451] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:15.108 [2024-07-13 07:57:20.853597] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.108 [2024-07-13 07:57:20.853613] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.108 [2024-07-13 07:57:20.853622] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.108 [2024-07-13 07:57:20.853652] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.108 07:57:20 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:15.108 07:57:20 -- common/autotest_common.sh@852 -- # return 0 00:11:15.108 07:57:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:15.108 07:57:20 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:15.108 07:57:20 -- common/autotest_common.sh@10 -- # set +x 00:11:15.366 07:57:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.366 07:57:20 -- target/tls.sh@74 -- # '[' tcp '!=' tcp ']' 00:11:15.366 07:57:20 -- target/tls.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:11:15.624 true 00:11:15.624 07:57:21 -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:15.624 07:57:21 -- target/tls.sh@82 -- # jq -r .tls_version 00:11:15.881 07:57:21 -- target/tls.sh@82 -- # version=0 00:11:15.881 07:57:21 -- target/tls.sh@83 -- # [[ 0 != \0 ]] 00:11:15.881 07:57:21 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:16.138 07:57:21 -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:16.138 07:57:21 -- target/tls.sh@90 -- # jq -r .tls_version 00:11:16.396 07:57:21 -- target/tls.sh@90 -- # version=13 00:11:16.396 07:57:21 -- target/tls.sh@91 -- # [[ 13 != \1\3 ]] 00:11:16.396 07:57:21 -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:11:16.654 07:57:22 -- target/tls.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:16.654 07:57:22 -- target/tls.sh@98 -- # jq -r .tls_version 00:11:16.912 07:57:22 -- target/tls.sh@98 -- # version=7 00:11:16.912 07:57:22 -- target/tls.sh@99 -- # [[ 7 != \7 ]] 00:11:16.912 07:57:22 -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:16.912 07:57:22 -- target/tls.sh@105 -- # jq -r .enable_ktls 00:11:17.170 07:57:22 -- target/tls.sh@105 -- # ktls=false 00:11:17.170 07:57:22 -- target/tls.sh@106 -- # [[ false != \f\a\l\s\e ]] 00:11:17.170 07:57:22 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:11:17.428 07:57:23 -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:17.428 07:57:23 -- target/tls.sh@113 -- # jq -r 
.enable_ktls 00:11:17.686 07:57:23 -- target/tls.sh@113 -- # ktls=true 00:11:17.686 07:57:23 -- target/tls.sh@114 -- # [[ true != \t\r\u\e ]] 00:11:17.686 07:57:23 -- target/tls.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:11:17.686 07:57:23 -- target/tls.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:11:17.686 07:57:23 -- target/tls.sh@121 -- # jq -r .enable_ktls 00:11:17.944 07:57:23 -- target/tls.sh@121 -- # ktls=false 00:11:17.944 07:57:23 -- target/tls.sh@122 -- # [[ false != \f\a\l\s\e ]] 00:11:17.944 07:57:23 -- target/tls.sh@127 -- # format_interchange_psk 00112233445566778899aabbccddeeff 00:11:17.944 07:57:23 -- target/tls.sh@49 -- # local key hash crc 00:11:17.944 07:57:23 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff 00:11:17.944 07:57:23 -- target/tls.sh@51 -- # hash=01 00:11:17.944 07:57:23 -- target/tls.sh@52 -- # echo -n 00112233445566778899aabbccddeeff 00:11:17.944 07:57:23 -- target/tls.sh@52 -- # gzip -1 -c 00:11:17.944 07:57:23 -- target/tls.sh@52 -- # tail -c8 00:11:17.944 07:57:23 -- target/tls.sh@52 -- # head -c 4 00:11:17.944 07:57:23 -- target/tls.sh@52 -- # crc='p$H�' 00:11:17.944 07:57:23 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:17.944 07:57:23 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeffp$H�' 00:11:17.944 07:57:23 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:17.944 07:57:23 -- target/tls.sh@127 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:17.944 07:57:23 -- target/tls.sh@128 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 00:11:17.944 07:57:23 -- target/tls.sh@49 -- # local key hash crc 00:11:17.944 07:57:23 -- target/tls.sh@51 -- # key=ffeeddccbbaa99887766554433221100 00:11:17.944 07:57:23 -- target/tls.sh@51 -- # hash=01 00:11:17.944 07:57:23 -- target/tls.sh@52 -- # echo -n ffeeddccbbaa99887766554433221100 00:11:17.944 07:57:23 -- target/tls.sh@52 -- # gzip -1 -c 00:11:17.944 07:57:23 -- target/tls.sh@52 -- # tail -c8 00:11:17.944 07:57:23 -- target/tls.sh@52 -- # head -c 4 00:11:18.202 07:57:23 -- target/tls.sh@52 -- # crc=$'_\006o\330' 00:11:18.202 07:57:23 -- target/tls.sh@54 -- # echo -n $'ffeeddccbbaa99887766554433221100_\006o\330' 00:11:18.202 07:57:23 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:18.202 07:57:23 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:18.202 07:57:23 -- target/tls.sh@128 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:18.202 07:57:23 -- target/tls.sh@130 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:18.202 07:57:23 -- target/tls.sh@131 -- # key_2_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:18.202 07:57:23 -- target/tls.sh@133 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:11:18.202 07:57:23 -- target/tls.sh@134 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:11:18.202 07:57:23 -- target/tls.sh@136 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:18.202 07:57:23 -- target/tls.sh@137 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:18.202 07:57:23 -- target/tls.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:11:18.460 07:57:24 -- target/tls.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:11:18.718 07:57:24 -- target/tls.sh@142 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:18.718 07:57:24 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:18.718 07:57:24 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:18.976 [2024-07-13 07:57:24.561181] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:18.976 07:57:24 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:19.235 07:57:24 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:19.235 [2024-07-13 07:57:24.989320] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:19.235 [2024-07-13 07:57:24.989470] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:19.235 07:57:25 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:19.494 malloc0 00:11:19.494 07:57:25 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:19.752 07:57:25 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:20.010 07:57:25 -- target/tls.sh@146 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:29.984 Initializing NVMe Controllers 00:11:29.984 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:29.984 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:29.984 Initialization complete. Launching workers. 00:11:29.984 ======================================================== 00:11:29.984 Latency(us) 00:11:29.984 Device Information : IOPS MiB/s Average min max 00:11:29.984 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11229.88 43.87 5700.02 1494.91 8092.43 00:11:29.984 ======================================================== 00:11:29.984 Total : 11229.88 43.87 5700.02 1494.91 8092.43 00:11:29.984 00:11:29.984 07:57:35 -- target/tls.sh@152 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:29.984 07:57:35 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:29.984 07:57:35 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:29.984 07:57:35 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:29.984 07:57:35 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:29.984 07:57:35 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:29.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
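The interchange PSKs written to key1.txt and key2.txt above are assembled by hand: the CRC32 of the hex key is pulled from gzip's trailer, appended to the key, and the result base64-encoded into the NVMeTLSkey-1:01 format. A sketch of format_interchange_psk as traced, for the first key only:

    # gzip's last 8 trailer bytes are CRC32 then ISIZE; keep the CRC32.
    key=00112233445566778899aabbccddeeff
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c 4)
    psk="NVMeTLSkey-1:01:$(echo -n "${key}${crc}" | base64):"
    echo -n "$psk" > key1.txt
    chmod 0600 key1.txt

The key file is then registered per host with nvmf_subsystem_add_host --psk key1.txt, and the listener is created with -k so the 10.0.0.2:4420 port requires TLS.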
00:11:29.984 07:57:35 -- target/tls.sh@28 -- # bdevperf_pid=74327 00:11:29.984 07:57:35 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:29.984 07:57:35 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:29.984 07:57:35 -- target/tls.sh@31 -- # waitforlisten 74327 /var/tmp/bdevperf.sock 00:11:29.984 07:57:35 -- common/autotest_common.sh@819 -- # '[' -z 74327 ']' 00:11:29.984 07:57:35 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:29.984 07:57:35 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:29.984 07:57:35 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:29.984 07:57:35 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:29.984 07:57:35 -- common/autotest_common.sh@10 -- # set +x 00:11:30.243 [2024-07-13 07:57:35.808934] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:30.243 [2024-07-13 07:57:35.809032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74327 ] 00:11:30.243 [2024-07-13 07:57:35.949297] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.243 [2024-07-13 07:57:35.988334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.179 07:57:36 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:31.179 07:57:36 -- common/autotest_common.sh@852 -- # return 0 00:11:31.179 07:57:36 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:31.179 [2024-07-13 07:57:36.936983] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:31.438 TLSTESTn1 00:11:31.438 07:57:37 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:31.438 Running I/O for 10 seconds... 
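On the initiator side the happy-path case attaches a TLS-enabled controller through bdevperf's own RPC socket and then drives the verify workload via the helper script. A sketch with the repository paths shortened (full paths as in the trace):

    ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key1.txt
    ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests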
00:11:41.446 00:11:41.446 Latency(us) 00:11:41.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.446 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:11:41.446 Verification LBA range: start 0x0 length 0x2000 00:11:41.446 TLSTESTn1 : 10.02 6068.91 23.71 0.00 0.00 21056.27 5213.09 22758.87 00:11:41.446 =================================================================================================================== 00:11:41.446 Total : 6068.91 23.71 0.00 0.00 21056.27 5213.09 22758.87 00:11:41.446 0 00:11:41.446 07:57:47 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:41.446 07:57:47 -- target/tls.sh@45 -- # killprocess 74327 00:11:41.446 07:57:47 -- common/autotest_common.sh@926 -- # '[' -z 74327 ']' 00:11:41.446 07:57:47 -- common/autotest_common.sh@930 -- # kill -0 74327 00:11:41.446 07:57:47 -- common/autotest_common.sh@931 -- # uname 00:11:41.446 07:57:47 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:41.446 07:57:47 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74327 00:11:41.446 killing process with pid 74327 00:11:41.446 Received shutdown signal, test time was about 10.000000 seconds 00:11:41.446 00:11:41.446 Latency(us) 00:11:41.446 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.446 =================================================================================================================== 00:11:41.446 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:41.446 07:57:47 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:41.446 07:57:47 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:41.446 07:57:47 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74327' 00:11:41.446 07:57:47 -- common/autotest_common.sh@945 -- # kill 74327 00:11:41.446 07:57:47 -- common/autotest_common.sh@950 -- # wait 74327 00:11:41.706 07:57:47 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:41.706 07:57:47 -- common/autotest_common.sh@640 -- # local es=0 00:11:41.706 07:57:47 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:41.706 07:57:47 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:11:41.706 07:57:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:41.706 07:57:47 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:11:41.706 07:57:47 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:41.706 07:57:47 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:41.706 07:57:47 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:41.706 07:57:47 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:41.706 07:57:47 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:41.706 07:57:47 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt' 00:11:41.706 07:57:47 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:41.706 07:57:47 -- target/tls.sh@28 -- # bdevperf_pid=74393 00:11:41.706 07:57:47 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:41.706 07:57:47 -- target/tls.sh@31 -- # 
waitforlisten 74393 /var/tmp/bdevperf.sock 00:11:41.706 07:57:47 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:41.706 07:57:47 -- common/autotest_common.sh@819 -- # '[' -z 74393 ']' 00:11:41.706 07:57:47 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:41.706 07:57:47 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:41.706 07:57:47 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:41.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:41.706 07:57:47 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:41.706 07:57:47 -- common/autotest_common.sh@10 -- # set +x 00:11:41.706 [2024-07-13 07:57:47.397598] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:41.706 [2024-07-13 07:57:47.397899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74393 ] 00:11:41.965 [2024-07-13 07:57:47.534486] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.965 [2024-07-13 07:57:47.565837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:42.530 07:57:48 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:42.530 07:57:48 -- common/autotest_common.sh@852 -- # return 0 00:11:42.530 07:57:48 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt 00:11:42.789 [2024-07-13 07:57:48.515707] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:42.789 [2024-07-13 07:57:48.527893] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:42.789 [2024-07-13 07:57:48.528527] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129a190 (107): Transport endpoint is not connected 00:11:42.789 [2024-07-13 07:57:48.529521] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x129a190 (9): Bad file descriptor 00:11:42.789 [2024-07-13 07:57:48.530517] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:42.789 [2024-07-13 07:57:48.530544] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:42.789 [2024-07-13 07:57:48.530570] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
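This failure is the point of the test case: key2.txt does not match the PSK registered for host1 on the target, so the TLS handshake never completes, the qpair is torn down (hence the "Bad file descriptor" flush error), and bdev_nvme_attach_controller returns an error instead of creating TLSTEST. A sketch of the failing call, paths shortened:

    # Expected to fail: key2.txt was never added with nvmf_subsystem_add_host.
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key2.txt
    echo $?    # non-zero; the JSON-RPC response below carries the error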
00:11:42.789 request: 00:11:42.789 { 00:11:42.789 "name": "TLSTEST", 00:11:42.789 "trtype": "tcp", 00:11:42.789 "traddr": "10.0.0.2", 00:11:42.789 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:42.789 "adrfam": "ipv4", 00:11:42.789 "trsvcid": "4420", 00:11:42.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:42.789 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt", 00:11:42.789 "method": "bdev_nvme_attach_controller", 00:11:42.789 "req_id": 1 00:11:42.789 } 00:11:42.789 Got JSON-RPC error response 00:11:42.789 response: 00:11:42.789 { 00:11:42.789 "code": -32602, 00:11:42.789 "message": "Invalid parameters" 00:11:42.789 } 00:11:42.789 07:57:48 -- target/tls.sh@36 -- # killprocess 74393 00:11:42.789 07:57:48 -- common/autotest_common.sh@926 -- # '[' -z 74393 ']' 00:11:42.789 07:57:48 -- common/autotest_common.sh@930 -- # kill -0 74393 00:11:42.789 07:57:48 -- common/autotest_common.sh@931 -- # uname 00:11:42.789 07:57:48 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:42.789 07:57:48 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74393 00:11:42.789 killing process with pid 74393 00:11:42.789 Received shutdown signal, test time was about 10.000000 seconds 00:11:42.789 00:11:42.789 Latency(us) 00:11:42.789 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:42.789 =================================================================================================================== 00:11:42.789 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:42.789 07:57:48 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:42.789 07:57:48 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:42.789 07:57:48 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74393' 00:11:42.789 07:57:48 -- common/autotest_common.sh@945 -- # kill 74393 00:11:42.789 07:57:48 -- common/autotest_common.sh@950 -- # wait 74393 00:11:43.048 07:57:48 -- target/tls.sh@37 -- # return 1 00:11:43.048 07:57:48 -- common/autotest_common.sh@643 -- # es=1 00:11:43.048 07:57:48 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:43.048 07:57:48 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:43.048 07:57:48 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:43.048 07:57:48 -- target/tls.sh@158 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:43.048 07:57:48 -- common/autotest_common.sh@640 -- # local es=0 00:11:43.048 07:57:48 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:43.049 07:57:48 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:11:43.049 07:57:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:43.049 07:57:48 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:11:43.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:11:43.049 07:57:48 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:43.049 07:57:48 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:43.049 07:57:48 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:43.049 07:57:48 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:43.049 07:57:48 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:11:43.049 07:57:48 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:43.049 07:57:48 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:43.049 07:57:48 -- target/tls.sh@28 -- # bdevperf_pid=74410 00:11:43.049 07:57:48 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:43.049 07:57:48 -- target/tls.sh@31 -- # waitforlisten 74410 /var/tmp/bdevperf.sock 00:11:43.049 07:57:48 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:43.049 07:57:48 -- common/autotest_common.sh@819 -- # '[' -z 74410 ']' 00:11:43.049 07:57:48 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:43.049 07:57:48 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:43.049 07:57:48 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:43.049 07:57:48 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:43.049 07:57:48 -- common/autotest_common.sh@10 -- # set +x 00:11:43.049 [2024-07-13 07:57:48.749830] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:11:43.049 [2024-07-13 07:57:48.750085] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74410 ] 00:11:43.307 [2024-07-13 07:57:48.882566] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.307 [2024-07-13 07:57:48.914404] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.242 07:57:49 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:44.242 07:57:49 -- common/autotest_common.sh@852 -- # return 0 00:11:44.242 07:57:49 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:44.242 [2024-07-13 07:57:49.947666] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:44.242 [2024-07-13 07:57:49.958086] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:44.242 [2024-07-13 07:57:49.958441] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:11:44.242 [2024-07-13 07:57:49.958652] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:44.242 [2024-07-13 07:57:49.959086] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a9190 (107): Transport endpoint is not connected 00:11:44.242 [2024-07-13 07:57:49.959998] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a9190 (9): Bad file descriptor 00:11:44.242 [2024-07-13 07:57:49.960995] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:44.242 [2024-07-13 07:57:49.961039] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:44.242 [2024-07-13 07:57:49.961051] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:11:44.242 request: 00:11:44.242 { 00:11:44.242 "name": "TLSTEST", 00:11:44.242 "trtype": "tcp", 00:11:44.242 "traddr": "10.0.0.2", 00:11:44.242 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:11:44.242 "adrfam": "ipv4", 00:11:44.242 "trsvcid": "4420", 00:11:44.242 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:44.242 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:44.242 "method": "bdev_nvme_attach_controller", 00:11:44.242 "req_id": 1 00:11:44.242 } 00:11:44.242 Got JSON-RPC error response 00:11:44.242 response: 00:11:44.242 { 00:11:44.242 "code": -32602, 00:11:44.242 "message": "Invalid parameters" 00:11:44.243 } 00:11:44.243 07:57:49 -- target/tls.sh@36 -- # killprocess 74410 00:11:44.243 07:57:49 -- common/autotest_common.sh@926 -- # '[' -z 74410 ']' 00:11:44.243 07:57:49 -- common/autotest_common.sh@930 -- # kill -0 74410 00:11:44.243 07:57:49 -- common/autotest_common.sh@931 -- # uname 00:11:44.243 07:57:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:44.243 07:57:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74410 00:11:44.243 killing process with pid 74410 00:11:44.243 Received shutdown signal, test time was about 10.000000 seconds 00:11:44.243 00:11:44.243 Latency(us) 00:11:44.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:44.243 =================================================================================================================== 00:11:44.243 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:44.243 07:57:50 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:44.243 07:57:50 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:44.243 07:57:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74410' 00:11:44.243 07:57:50 -- common/autotest_common.sh@945 -- # kill 74410 00:11:44.243 07:57:50 -- common/autotest_common.sh@950 -- # wait 74410 00:11:44.501 07:57:50 -- target/tls.sh@37 -- # return 1 00:11:44.501 07:57:50 -- common/autotest_common.sh@643 -- # es=1 00:11:44.502 07:57:50 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:44.502 07:57:50 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:44.502 07:57:50 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:44.502 07:57:50 -- target/tls.sh@161 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:44.502 07:57:50 -- common/autotest_common.sh@640 -- # local es=0 00:11:44.502 07:57:50 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:44.502 07:57:50 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:11:44.502 07:57:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:44.502 07:57:50 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:11:44.502 07:57:50 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:44.502 07:57:50 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:44.502 07:57:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:44.502 07:57:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:11:44.502 07:57:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:44.502 07:57:50 -- target/tls.sh@23 -- # psk='--psk 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt' 00:11:44.502 07:57:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:44.502 07:57:50 -- target/tls.sh@28 -- # bdevperf_pid=74426 00:11:44.502 07:57:50 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:44.502 07:57:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:44.502 07:57:50 -- target/tls.sh@31 -- # waitforlisten 74426 /var/tmp/bdevperf.sock 00:11:44.502 07:57:50 -- common/autotest_common.sh@819 -- # '[' -z 74426 ']' 00:11:44.502 07:57:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:44.502 07:57:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:44.502 07:57:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:44.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:44.502 07:57:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:44.502 07:57:50 -- common/autotest_common.sh@10 -- # set +x 00:11:44.502 [2024-07-13 07:57:50.181919] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:44.502 [2024-07-13 07:57:50.182193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74426 ] 00:11:44.502 [2024-07-13 07:57:50.315508] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:44.761 [2024-07-13 07:57:50.349457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:45.327 07:57:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:45.327 07:57:51 -- common/autotest_common.sh@852 -- # return 0 00:11:45.327 07:57:51 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt 00:11:45.586 [2024-07-13 07:57:51.260434] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:45.586 [2024-07-13 07:57:51.268099] tcp.c: 866:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:45.586 [2024-07-13 07:57:51.268135] posix.c: 583:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:11:45.586 [2024-07-13 07:57:51.268198] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:45.586 [2024-07-13 07:57:51.269141] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ad190 (107): Transport endpoint is not connected 00:11:45.586 [2024-07-13 07:57:51.270133] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10ad190 (9): Bad file descriptor 00:11:45.586 [2024-07-13 07:57:51.271129] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:11:45.586 [2024-07-13 07:57:51.271171] nvme.c: 
708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:45.586 [2024-07-13 07:57:51.271197] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:11:45.586 request: 00:11:45.586 { 00:11:45.586 "name": "TLSTEST", 00:11:45.586 "trtype": "tcp", 00:11:45.586 "traddr": "10.0.0.2", 00:11:45.586 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:45.586 "adrfam": "ipv4", 00:11:45.586 "trsvcid": "4420", 00:11:45.586 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:11:45.586 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt", 00:11:45.586 "method": "bdev_nvme_attach_controller", 00:11:45.586 "req_id": 1 00:11:45.586 } 00:11:45.586 Got JSON-RPC error response 00:11:45.586 response: 00:11:45.586 { 00:11:45.586 "code": -32602, 00:11:45.586 "message": "Invalid parameters" 00:11:45.586 } 00:11:45.586 07:57:51 -- target/tls.sh@36 -- # killprocess 74426 00:11:45.586 07:57:51 -- common/autotest_common.sh@926 -- # '[' -z 74426 ']' 00:11:45.586 07:57:51 -- common/autotest_common.sh@930 -- # kill -0 74426 00:11:45.586 07:57:51 -- common/autotest_common.sh@931 -- # uname 00:11:45.586 07:57:51 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:45.586 07:57:51 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74426 00:11:45.586 killing process with pid 74426 00:11:45.586 Received shutdown signal, test time was about 10.000000 seconds 00:11:45.586 00:11:45.586 Latency(us) 00:11:45.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:45.586 =================================================================================================================== 00:11:45.586 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:45.586 07:57:51 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:45.586 07:57:51 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:45.586 07:57:51 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74426' 00:11:45.586 07:57:51 -- common/autotest_common.sh@945 -- # kill 74426 00:11:45.586 07:57:51 -- common/autotest_common.sh@950 -- # wait 74426 00:11:45.845 07:57:51 -- target/tls.sh@37 -- # return 1 00:11:45.845 07:57:51 -- common/autotest_common.sh@643 -- # es=1 00:11:45.845 07:57:51 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:45.845 07:57:51 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:45.845 07:57:51 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:45.845 07:57:51 -- target/tls.sh@164 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:45.845 07:57:51 -- common/autotest_common.sh@640 -- # local es=0 00:11:45.845 07:57:51 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:45.845 07:57:51 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:11:45.845 07:57:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:45.845 07:57:51 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:11:45.845 07:57:51 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:11:45.845 07:57:51 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:11:45.845 07:57:51 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:45.845 07:57:51 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:45.845 07:57:51 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:45.845 
07:57:51 -- target/tls.sh@23 -- # psk= 00:11:45.845 07:57:51 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:45.845 07:57:51 -- target/tls.sh@28 -- # bdevperf_pid=74442 00:11:45.845 07:57:51 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:45.845 07:57:51 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:45.845 07:57:51 -- target/tls.sh@31 -- # waitforlisten 74442 /var/tmp/bdevperf.sock 00:11:45.845 07:57:51 -- common/autotest_common.sh@819 -- # '[' -z 74442 ']' 00:11:45.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:45.845 07:57:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:45.845 07:57:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:45.845 07:57:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:45.845 07:57:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:45.845 07:57:51 -- common/autotest_common.sh@10 -- # set +x 00:11:45.845 [2024-07-13 07:57:51.483917] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:45.845 [2024-07-13 07:57:51.484003] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74442 ] 00:11:45.845 [2024-07-13 07:57:51.615542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.845 [2024-07-13 07:57:51.646458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:46.780 07:57:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:46.780 07:57:52 -- common/autotest_common.sh@852 -- # return 0 00:11:46.780 07:57:52 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:11:47.039 [2024-07-13 07:57:52.618398] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:11:47.039 [2024-07-13 07:57:52.619880] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd4ea20 (9): Bad file descriptor 00:11:47.039 [2024-07-13 07:57:52.620875] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:11:47.039 [2024-07-13 07:57:52.621286] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:11:47.039 [2024-07-13 07:57:52.621480] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
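Each of the negative attaches above (target/tls.sh@155, @158 and @161) drives the same initiator-side RPC; the handshake fails because the target has no PSK registered for that particular (hostnqn, subnqn) pair, and the lookup identity it reports in the errors is the string "NVMe0R01 <hostnqn> <subnqn>". A condensed, hypothetical replay of the failing attach from target/tls.sh@158, with every argument copied from this run (the initiator then sees errno 107 and the RPC returns -32602, as logged above):

    # Replay sketch only -- assumes the bdevperf -z instance is already listening
    # on /var/tmp/bdevperf.sock, exactly as in this run.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 \
        --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt
    # Target-side log in this case:
    #   Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1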
00:11:47.039 request: 00:11:47.039 { 00:11:47.039 "name": "TLSTEST", 00:11:47.039 "trtype": "tcp", 00:11:47.039 "traddr": "10.0.0.2", 00:11:47.039 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:47.039 "adrfam": "ipv4", 00:11:47.039 "trsvcid": "4420", 00:11:47.039 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:47.039 "method": "bdev_nvme_attach_controller", 00:11:47.039 "req_id": 1 00:11:47.039 } 00:11:47.039 Got JSON-RPC error response 00:11:47.039 response: 00:11:47.039 { 00:11:47.039 "code": -32602, 00:11:47.039 "message": "Invalid parameters" 00:11:47.039 } 00:11:47.039 07:57:52 -- target/tls.sh@36 -- # killprocess 74442 00:11:47.039 07:57:52 -- common/autotest_common.sh@926 -- # '[' -z 74442 ']' 00:11:47.039 07:57:52 -- common/autotest_common.sh@930 -- # kill -0 74442 00:11:47.039 07:57:52 -- common/autotest_common.sh@931 -- # uname 00:11:47.039 07:57:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:47.039 07:57:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74442 00:11:47.039 killing process with pid 74442 00:11:47.039 Received shutdown signal, test time was about 10.000000 seconds 00:11:47.039 00:11:47.039 Latency(us) 00:11:47.039 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:47.039 =================================================================================================================== 00:11:47.039 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:11:47.039 07:57:52 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:11:47.039 07:57:52 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:11:47.039 07:57:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74442' 00:11:47.039 07:57:52 -- common/autotest_common.sh@945 -- # kill 74442 00:11:47.039 07:57:52 -- common/autotest_common.sh@950 -- # wait 74442 00:11:47.039 07:57:52 -- target/tls.sh@37 -- # return 1 00:11:47.039 07:57:52 -- common/autotest_common.sh@643 -- # es=1 00:11:47.039 07:57:52 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:11:47.039 07:57:52 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:11:47.039 07:57:52 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:11:47.039 07:57:52 -- target/tls.sh@167 -- # killprocess 74183 00:11:47.039 07:57:52 -- common/autotest_common.sh@926 -- # '[' -z 74183 ']' 00:11:47.039 07:57:52 -- common/autotest_common.sh@930 -- # kill -0 74183 00:11:47.039 07:57:52 -- common/autotest_common.sh@931 -- # uname 00:11:47.039 07:57:52 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:11:47.039 07:57:52 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74183 00:11:47.039 killing process with pid 74183 00:11:47.039 07:57:52 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:11:47.039 07:57:52 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:11:47.039 07:57:52 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74183' 00:11:47.039 07:57:52 -- common/autotest_common.sh@945 -- # kill 74183 00:11:47.039 07:57:52 -- common/autotest_common.sh@950 -- # wait 74183 00:11:47.297 07:57:52 -- target/tls.sh@168 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 02 00:11:47.297 07:57:52 -- target/tls.sh@49 -- # local key hash crc 00:11:47.297 07:57:52 -- target/tls.sh@51 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:11:47.297 07:57:52 -- target/tls.sh@51 -- # hash=02 00:11:47.297 07:57:52 -- target/tls.sh@52 -- # echo -n 
00112233445566778899aabbccddeeff0011223344556677 00:11:47.297 07:57:52 -- target/tls.sh@52 -- # gzip -1 -c 00:11:47.297 07:57:52 -- target/tls.sh@52 -- # head -c 4 00:11:47.297 07:57:52 -- target/tls.sh@52 -- # tail -c8 00:11:47.297 07:57:52 -- target/tls.sh@52 -- # crc='�e�'\''' 00:11:47.297 07:57:52 -- target/tls.sh@54 -- # base64 /dev/fd/62 00:11:47.297 07:57:52 -- target/tls.sh@54 -- # echo -n '00112233445566778899aabbccddeeff0011223344556677�e�'\''' 00:11:47.297 07:57:52 -- target/tls.sh@54 -- # echo NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:47.297 07:57:52 -- target/tls.sh@168 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:47.297 07:57:52 -- target/tls.sh@169 -- # key_long_path=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:47.297 07:57:52 -- target/tls.sh@170 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:11:47.297 07:57:52 -- target/tls.sh@171 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:47.297 07:57:52 -- target/tls.sh@172 -- # nvmfappstart -m 0x2 00:11:47.297 07:57:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:11:47.297 07:57:52 -- common/autotest_common.sh@712 -- # xtrace_disable 00:11:47.297 07:57:52 -- common/autotest_common.sh@10 -- # set +x 00:11:47.297 07:57:52 -- nvmf/common.sh@469 -- # nvmfpid=74478 00:11:47.297 07:57:52 -- nvmf/common.sh@470 -- # waitforlisten 74478 00:11:47.297 07:57:52 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:47.297 07:57:52 -- common/autotest_common.sh@819 -- # '[' -z 74478 ']' 00:11:47.297 07:57:52 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.297 07:57:52 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:47.297 07:57:52 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.297 07:57:52 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:47.297 07:57:52 -- common/autotest_common.sh@10 -- # set +x 00:11:47.297 [2024-07-13 07:57:53.026876] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:47.298 [2024-07-13 07:57:53.026963] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.556 [2024-07-13 07:57:53.159548] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:47.556 [2024-07-13 07:57:53.189959] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:11:47.556 [2024-07-13 07:57:53.190410] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.556 [2024-07-13 07:57:53.190512] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.556 [2024-07-13 07:57:53.190628] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
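The NVMeTLSkey-1:02:...: value used for the rest of this run is produced by the format_interchange_psk steps shown above (target/tls.sh@168): the 48-character hex string is kept as ASCII, its CRC32 is read out of the gzip -1 trailer, and key plus CRC are base64-encoded. The unprintable characters in the crc='...' assignment above are simply the four raw CRC bytes (c1 65 cd 27, recoverable from the base64 tail wWXNJw==). A condensed sketch of the same pipeline, assuming GNU gzip and base64:

    key=00112233445566778899aabbccddeeff0011223344556677
    # gzip -1 appends an 8-byte trailer: CRC32 of the uncompressed input
    # (little-endian), then ISIZE; tail -c8 | head -c4 extracts the CRC32 bytes.
    crc=$(echo -n "$key" | gzip -1 -c | tail -c8 | head -c4)
    # Command substitution is safe here only because these CRC bytes contain no NUL
    # and no trailing newline; the test script stores the bytes the same way.
    psk="NVMeTLSkey-1:02:$(echo -n "${key}${crc}" | base64):"
    echo "$psk"
    # NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: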
00:11:47.556 [2024-07-13 07:57:53.190762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.123 07:57:53 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:48.123 07:57:53 -- common/autotest_common.sh@852 -- # return 0 00:11:48.123 07:57:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:11:48.123 07:57:53 -- common/autotest_common.sh@718 -- # xtrace_disable 00:11:48.123 07:57:53 -- common/autotest_common.sh@10 -- # set +x 00:11:48.123 07:57:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.123 07:57:53 -- target/tls.sh@174 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:48.123 07:57:53 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:48.123 07:57:53 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:11:48.382 [2024-07-13 07:57:54.175479] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.382 07:57:54 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:11:48.641 07:57:54 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:11:48.899 [2024-07-13 07:57:54.663593] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:11:48.899 [2024-07-13 07:57:54.664019] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.899 07:57:54 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:11:49.158 malloc0 00:11:49.158 07:57:54 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:49.416 07:57:55 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:49.675 07:57:55 -- target/tls.sh@176 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:49.675 07:57:55 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:11:49.675 07:57:55 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:11:49.675 07:57:55 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:11:49.675 07:57:55 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:11:49.675 07:57:55 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:49.675 07:57:55 -- target/tls.sh@28 -- # bdevperf_pid=74515 00:11:49.675 07:57:55 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:11:49.675 07:57:55 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:11:49.675 07:57:55 -- target/tls.sh@31 -- # waitforlisten 74515 /var/tmp/bdevperf.sock 00:11:49.675 07:57:55 -- common/autotest_common.sh@819 -- # '[' -z 74515 ']' 00:11:49.675 07:57:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:49.675 07:57:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:11:49.675 07:57:55 -- common/autotest_common.sh@826 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:49.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:49.675 07:57:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:11:49.675 07:57:55 -- common/autotest_common.sh@10 -- # set +x 00:11:49.675 [2024-07-13 07:57:55.364546] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:11:49.675 [2024-07-13 07:57:55.365021] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74515 ] 00:11:49.935 [2024-07-13 07:57:55.495416] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:49.935 [2024-07-13 07:57:55.527871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:50.506 07:57:56 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:11:50.506 07:57:56 -- common/autotest_common.sh@852 -- # return 0 00:11:50.506 07:57:56 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:11:50.765 [2024-07-13 07:57:56.538414] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:11:51.023 TLSTESTn1 00:11:51.023 07:57:56 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:11:51.023 Running I/O for 10 seconds... 00:12:00.996 00:12:00.996 Latency(us) 00:12:00.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.996 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:00.996 Verification LBA range: start 0x0 length 0x2000 00:12:00.996 TLSTESTn1 : 10.01 6254.57 24.43 0.00 0.00 20432.67 4349.21 27405.96 00:12:00.996 =================================================================================================================== 00:12:00.996 Total : 6254.57 24.43 0.00 0.00 20432.67 4349.21 27405.96 00:12:00.996 0 00:12:00.996 07:58:06 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:00.996 07:58:06 -- target/tls.sh@45 -- # killprocess 74515 00:12:00.996 07:58:06 -- common/autotest_common.sh@926 -- # '[' -z 74515 ']' 00:12:00.996 07:58:06 -- common/autotest_common.sh@930 -- # kill -0 74515 00:12:00.996 07:58:06 -- common/autotest_common.sh@931 -- # uname 00:12:00.996 07:58:06 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:00.996 07:58:06 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74515 00:12:00.996 killing process with pid 74515 00:12:00.996 Received shutdown signal, test time was about 10.000000 seconds 00:12:00.996 00:12:00.996 Latency(us) 00:12:00.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:00.996 =================================================================================================================== 00:12:00.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:00.996 07:58:06 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:00.996 07:58:06 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:00.996 07:58:06 -- common/autotest_common.sh@944 -- # echo 
'killing process with pid 74515' 00:12:00.996 07:58:06 -- common/autotest_common.sh@945 -- # kill 74515 00:12:00.996 07:58:06 -- common/autotest_common.sh@950 -- # wait 74515 00:12:01.256 07:58:06 -- target/tls.sh@179 -- # chmod 0666 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:01.256 07:58:06 -- target/tls.sh@180 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:01.256 07:58:06 -- common/autotest_common.sh@640 -- # local es=0 00:12:01.256 07:58:06 -- common/autotest_common.sh@642 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:01.256 07:58:06 -- common/autotest_common.sh@628 -- # local arg=run_bdevperf 00:12:01.256 07:58:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:01.256 07:58:06 -- common/autotest_common.sh@632 -- # type -t run_bdevperf 00:12:01.256 07:58:06 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:01.256 07:58:06 -- common/autotest_common.sh@643 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:01.256 07:58:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:12:01.256 07:58:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:12:01.256 07:58:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:12:01.256 07:58:06 -- target/tls.sh@23 -- # psk='--psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt' 00:12:01.256 07:58:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:01.256 07:58:06 -- target/tls.sh@28 -- # bdevperf_pid=74580 00:12:01.256 07:58:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:01.256 07:58:06 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:01.256 07:58:06 -- target/tls.sh@31 -- # waitforlisten 74580 /var/tmp/bdevperf.sock 00:12:01.256 07:58:06 -- common/autotest_common.sh@819 -- # '[' -z 74580 ']' 00:12:01.256 07:58:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:01.256 07:58:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:01.256 07:58:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:01.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:01.256 07:58:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:01.256 07:58:06 -- common/autotest_common.sh@10 -- # set +x 00:12:01.256 [2024-07-13 07:58:06.996985] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:12:01.256 [2024-07-13 07:58:06.997276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74580 ] 00:12:01.515 [2024-07-13 07:58:07.136499] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.515 [2024-07-13 07:58:07.171033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:02.451 07:58:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:02.451 07:58:07 -- common/autotest_common.sh@852 -- # return 0 00:12:02.452 07:58:07 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:02.452 [2024-07-13 07:58:08.156230] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:02.452 [2024-07-13 07:58:08.156309] bdev_nvme_rpc.c: 336:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:02.452 request: 00:12:02.452 { 00:12:02.452 "name": "TLSTEST", 00:12:02.452 "trtype": "tcp", 00:12:02.452 "traddr": "10.0.0.2", 00:12:02.452 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:02.452 "adrfam": "ipv4", 00:12:02.452 "trsvcid": "4420", 00:12:02.452 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:02.452 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:02.452 "method": "bdev_nvme_attach_controller", 00:12:02.452 "req_id": 1 00:12:02.452 } 00:12:02.452 Got JSON-RPC error response 00:12:02.452 response: 00:12:02.452 { 00:12:02.452 "code": -22, 00:12:02.452 "message": "Could not retrieve PSK from file: /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:02.452 } 00:12:02.452 07:58:08 -- target/tls.sh@36 -- # killprocess 74580 00:12:02.452 07:58:08 -- common/autotest_common.sh@926 -- # '[' -z 74580 ']' 00:12:02.452 07:58:08 -- common/autotest_common.sh@930 -- # kill -0 74580 00:12:02.452 07:58:08 -- common/autotest_common.sh@931 -- # uname 00:12:02.452 07:58:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:02.452 07:58:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74580 00:12:02.452 07:58:08 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:02.452 07:58:08 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:02.452 07:58:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74580' 00:12:02.452 killing process with pid 74580 00:12:02.452 07:58:08 -- common/autotest_common.sh@945 -- # kill 74580 00:12:02.452 Received shutdown signal, test time was about 10.000000 seconds 00:12:02.452 00:12:02.452 Latency(us) 00:12:02.452 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:02.452 =================================================================================================================== 00:12:02.452 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:02.452 07:58:08 -- common/autotest_common.sh@950 -- # wait 74580 00:12:02.711 07:58:08 -- target/tls.sh@37 -- # return 1 00:12:02.711 07:58:08 -- common/autotest_common.sh@643 -- # es=1 00:12:02.711 07:58:08 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:02.711 07:58:08 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:02.711 07:58:08 -- 
common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:02.711 07:58:08 -- target/tls.sh@183 -- # killprocess 74478 00:12:02.711 07:58:08 -- common/autotest_common.sh@926 -- # '[' -z 74478 ']' 00:12:02.711 07:58:08 -- common/autotest_common.sh@930 -- # kill -0 74478 00:12:02.711 07:58:08 -- common/autotest_common.sh@931 -- # uname 00:12:02.711 07:58:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:02.711 07:58:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74478 00:12:02.711 killing process with pid 74478 00:12:02.711 07:58:08 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:02.711 07:58:08 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:02.711 07:58:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74478' 00:12:02.711 07:58:08 -- common/autotest_common.sh@945 -- # kill 74478 00:12:02.711 07:58:08 -- common/autotest_common.sh@950 -- # wait 74478 00:12:02.711 07:58:08 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:12:02.711 07:58:08 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:02.711 07:58:08 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:02.711 07:58:08 -- common/autotest_common.sh@10 -- # set +x 00:12:02.711 07:58:08 -- nvmf/common.sh@469 -- # nvmfpid=74606 00:12:02.711 07:58:08 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:02.711 07:58:08 -- nvmf/common.sh@470 -- # waitforlisten 74606 00:12:02.711 07:58:08 -- common/autotest_common.sh@819 -- # '[' -z 74606 ']' 00:12:02.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:02.711 07:58:08 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:02.711 07:58:08 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:02.711 07:58:08 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:02.711 07:58:08 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:02.711 07:58:08 -- common/autotest_common.sh@10 -- # set +x 00:12:02.970 [2024-07-13 07:58:08.539365] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:02.970 [2024-07-13 07:58:08.539445] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:02.970 [2024-07-13 07:58:08.670918] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:02.970 [2024-07-13 07:58:08.701325] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:02.970 [2024-07-13 07:58:08.701468] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:02.970 [2024-07-13 07:58:08.701482] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:02.970 [2024-07-13 07:58:08.701490] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:02.970 [2024-07-13 07:58:08.701516] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:03.907 07:58:09 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:03.907 07:58:09 -- common/autotest_common.sh@852 -- # return 0 00:12:03.907 07:58:09 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:03.907 07:58:09 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:03.907 07:58:09 -- common/autotest_common.sh@10 -- # set +x 00:12:03.907 07:58:09 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:03.907 07:58:09 -- target/tls.sh@186 -- # NOT setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:03.907 07:58:09 -- common/autotest_common.sh@640 -- # local es=0 00:12:03.907 07:58:09 -- common/autotest_common.sh@642 -- # valid_exec_arg setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:03.907 07:58:09 -- common/autotest_common.sh@628 -- # local arg=setup_nvmf_tgt 00:12:03.907 07:58:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:03.907 07:58:09 -- common/autotest_common.sh@632 -- # type -t setup_nvmf_tgt 00:12:03.907 07:58:09 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:03.907 07:58:09 -- common/autotest_common.sh@643 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:03.907 07:58:09 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:03.907 07:58:09 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:04.166 [2024-07-13 07:58:09.773855] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.166 07:58:09 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:04.426 07:58:10 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:04.426 [2024-07-13 07:58:10.229991] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:04.426 [2024-07-13 07:58:10.230210] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:04.684 07:58:10 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:04.684 malloc0 00:12:04.684 07:58:10 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:04.942 07:58:10 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:05.201 [2024-07-13 07:58:10.844060] tcp.c:3549:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:12:05.201 [2024-07-13 07:58:10.844094] tcp.c:3618:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:12:05.201 [2024-07-13 07:58:10.844127] subsystem.c: 880:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:12:05.201 request: 00:12:05.201 { 00:12:05.201 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:05.201 "host": "nqn.2016-06.io.spdk:host1", 00:12:05.201 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:05.201 "method": "nvmf_subsystem_add_host", 00:12:05.201 
"req_id": 1 00:12:05.201 } 00:12:05.201 Got JSON-RPC error response 00:12:05.201 response: 00:12:05.201 { 00:12:05.201 "code": -32603, 00:12:05.201 "message": "Internal error" 00:12:05.201 } 00:12:05.201 07:58:10 -- common/autotest_common.sh@643 -- # es=1 00:12:05.201 07:58:10 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:05.201 07:58:10 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:05.201 07:58:10 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:12:05.201 07:58:10 -- target/tls.sh@189 -- # killprocess 74606 00:12:05.201 07:58:10 -- common/autotest_common.sh@926 -- # '[' -z 74606 ']' 00:12:05.201 07:58:10 -- common/autotest_common.sh@930 -- # kill -0 74606 00:12:05.201 07:58:10 -- common/autotest_common.sh@931 -- # uname 00:12:05.201 07:58:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:05.201 07:58:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74606 00:12:05.202 07:58:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:05.202 07:58:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:05.202 07:58:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74606' 00:12:05.202 killing process with pid 74606 00:12:05.202 07:58:10 -- common/autotest_common.sh@945 -- # kill 74606 00:12:05.202 07:58:10 -- common/autotest_common.sh@950 -- # wait 74606 00:12:05.461 07:58:11 -- target/tls.sh@190 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:05.461 07:58:11 -- target/tls.sh@193 -- # nvmfappstart -m 0x2 00:12:05.461 07:58:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:05.461 07:58:11 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:05.461 07:58:11 -- common/autotest_common.sh@10 -- # set +x 00:12:05.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.461 07:58:11 -- nvmf/common.sh@469 -- # nvmfpid=74651 00:12:05.461 07:58:11 -- nvmf/common.sh@470 -- # waitforlisten 74651 00:12:05.461 07:58:11 -- common/autotest_common.sh@819 -- # '[' -z 74651 ']' 00:12:05.461 07:58:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.461 07:58:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:05.461 07:58:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:05.461 07:58:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.461 07:58:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:05.461 07:58:11 -- common/autotest_common.sh@10 -- # set +x 00:12:05.461 [2024-07-13 07:58:11.081849] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:05.461 [2024-07-13 07:58:11.081936] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.461 [2024-07-13 07:58:11.218405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.461 [2024-07-13 07:58:11.248293] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:05.461 [2024-07-13 07:58:11.248454] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:05.461 [2024-07-13 07:58:11.248467] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.461 [2024-07-13 07:58:11.248474] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.461 [2024-07-13 07:58:11.248496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.397 07:58:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:06.397 07:58:11 -- common/autotest_common.sh@852 -- # return 0 00:12:06.397 07:58:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:06.397 07:58:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:06.397 07:58:11 -- common/autotest_common.sh@10 -- # set +x 00:12:06.397 07:58:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.397 07:58:12 -- target/tls.sh@194 -- # setup_nvmf_tgt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:06.397 07:58:12 -- target/tls.sh@58 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:06.397 07:58:12 -- target/tls.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:12:06.655 [2024-07-13 07:58:12.244586] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.655 07:58:12 -- target/tls.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:12:06.914 07:58:12 -- target/tls.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:12:06.914 [2024-07-13 07:58:12.688672] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:06.914 [2024-07-13 07:58:12.688890] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.914 07:58:12 -- target/tls.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:12:07.175 malloc0 00:12:07.176 07:58:12 -- target/tls.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:07.435 07:58:13 -- target/tls.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:07.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:07.693 07:58:13 -- target/tls.sh@197 -- # bdevperf_pid=74688 00:12:07.693 07:58:13 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:07.693 07:58:13 -- target/tls.sh@199 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:12:07.693 07:58:13 -- target/tls.sh@200 -- # waitforlisten 74688 /var/tmp/bdevperf.sock 00:12:07.693 07:58:13 -- common/autotest_common.sh@819 -- # '[' -z 74688 ']' 00:12:07.693 07:58:13 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:07.693 07:58:13 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:07.693 07:58:13 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:12:07.693 07:58:13 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:07.693 07:58:13 -- common/autotest_common.sh@10 -- # set +x 00:12:07.693 [2024-07-13 07:58:13.388942] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:07.693 [2024-07-13 07:58:13.389466] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74688 ] 00:12:07.952 [2024-07-13 07:58:13.519196] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.952 [2024-07-13 07:58:13.552746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.885 07:58:14 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:08.885 07:58:14 -- common/autotest_common.sh@852 -- # return 0 00:12:08.885 07:58:14 -- target/tls.sh@201 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:08.885 [2024-07-13 07:58:14.523568] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:08.885 TLSTESTn1 00:12:08.885 07:58:14 -- target/tls.sh@205 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:12:09.145 07:58:14 -- target/tls.sh@205 -- # tgtconf='{ 00:12:09.145 "subsystems": [ 00:12:09.145 { 00:12:09.145 "subsystem": "iobuf", 00:12:09.145 "config": [ 00:12:09.145 { 00:12:09.145 "method": "iobuf_set_options", 00:12:09.145 "params": { 00:12:09.145 "small_pool_count": 8192, 00:12:09.145 "large_pool_count": 1024, 00:12:09.145 "small_bufsize": 8192, 00:12:09.145 "large_bufsize": 135168 00:12:09.145 } 00:12:09.145 } 00:12:09.145 ] 00:12:09.145 }, 00:12:09.145 { 00:12:09.145 "subsystem": "sock", 00:12:09.145 "config": [ 00:12:09.145 { 00:12:09.145 "method": "sock_impl_set_options", 00:12:09.145 "params": { 00:12:09.145 "impl_name": "uring", 00:12:09.145 "recv_buf_size": 2097152, 00:12:09.145 "send_buf_size": 2097152, 00:12:09.145 "enable_recv_pipe": true, 00:12:09.145 "enable_quickack": false, 00:12:09.145 "enable_placement_id": 0, 00:12:09.145 "enable_zerocopy_send_server": false, 00:12:09.145 "enable_zerocopy_send_client": false, 00:12:09.145 "zerocopy_threshold": 0, 00:12:09.145 "tls_version": 0, 00:12:09.145 "enable_ktls": false 00:12:09.145 } 00:12:09.145 }, 00:12:09.145 { 00:12:09.145 "method": "sock_impl_set_options", 00:12:09.145 "params": { 00:12:09.145 "impl_name": "posix", 00:12:09.145 "recv_buf_size": 2097152, 00:12:09.145 "send_buf_size": 2097152, 00:12:09.145 "enable_recv_pipe": true, 00:12:09.145 "enable_quickack": false, 00:12:09.145 "enable_placement_id": 0, 00:12:09.145 "enable_zerocopy_send_server": true, 00:12:09.145 "enable_zerocopy_send_client": false, 00:12:09.145 "zerocopy_threshold": 0, 00:12:09.145 "tls_version": 0, 00:12:09.145 "enable_ktls": false 00:12:09.145 } 00:12:09.145 }, 00:12:09.145 { 00:12:09.145 "method": "sock_impl_set_options", 00:12:09.145 "params": { 00:12:09.145 "impl_name": "ssl", 00:12:09.145 "recv_buf_size": 4096, 00:12:09.145 "send_buf_size": 4096, 00:12:09.145 "enable_recv_pipe": true, 00:12:09.145 "enable_quickack": false, 00:12:09.145 "enable_placement_id": 0, 00:12:09.145 "enable_zerocopy_send_server": true, 00:12:09.145 "enable_zerocopy_send_client": false, 00:12:09.145 
"zerocopy_threshold": 0, 00:12:09.145 "tls_version": 0, 00:12:09.145 "enable_ktls": false 00:12:09.145 } 00:12:09.145 } 00:12:09.145 ] 00:12:09.145 }, 00:12:09.145 { 00:12:09.145 "subsystem": "vmd", 00:12:09.145 "config": [] 00:12:09.145 }, 00:12:09.145 { 00:12:09.145 "subsystem": "accel", 00:12:09.145 "config": [ 00:12:09.145 { 00:12:09.145 "method": "accel_set_options", 00:12:09.145 "params": { 00:12:09.145 "small_cache_size": 128, 00:12:09.145 "large_cache_size": 16, 00:12:09.145 "task_count": 2048, 00:12:09.145 "sequence_count": 2048, 00:12:09.145 "buf_count": 2048 00:12:09.145 } 00:12:09.145 } 00:12:09.145 ] 00:12:09.145 }, 00:12:09.145 { 00:12:09.145 "subsystem": "bdev", 00:12:09.145 "config": [ 00:12:09.145 { 00:12:09.145 "method": "bdev_set_options", 00:12:09.145 "params": { 00:12:09.145 "bdev_io_pool_size": 65535, 00:12:09.145 "bdev_io_cache_size": 256, 00:12:09.145 "bdev_auto_examine": true, 00:12:09.145 "iobuf_small_cache_size": 128, 00:12:09.145 "iobuf_large_cache_size": 16 00:12:09.145 } 00:12:09.145 }, 00:12:09.145 { 00:12:09.145 "method": "bdev_raid_set_options", 00:12:09.145 "params": { 00:12:09.145 "process_window_size_kb": 1024 00:12:09.145 } 00:12:09.145 }, 00:12:09.145 { 00:12:09.145 "method": "bdev_iscsi_set_options", 00:12:09.145 "params": { 00:12:09.145 "timeout_sec": 30 00:12:09.145 } 00:12:09.145 }, 00:12:09.145 { 00:12:09.145 "method": "bdev_nvme_set_options", 00:12:09.145 "params": { 00:12:09.145 "action_on_timeout": "none", 00:12:09.145 "timeout_us": 0, 00:12:09.145 "timeout_admin_us": 0, 00:12:09.145 "keep_alive_timeout_ms": 10000, 00:12:09.145 "transport_retry_count": 4, 00:12:09.145 "arbitration_burst": 0, 00:12:09.145 "low_priority_weight": 0, 00:12:09.145 "medium_priority_weight": 0, 00:12:09.145 "high_priority_weight": 0, 00:12:09.145 "nvme_adminq_poll_period_us": 10000, 00:12:09.145 "nvme_ioq_poll_period_us": 0, 00:12:09.145 "io_queue_requests": 0, 00:12:09.145 "delay_cmd_submit": true, 00:12:09.145 "bdev_retry_count": 3, 00:12:09.145 "transport_ack_timeout": 0, 00:12:09.145 "ctrlr_loss_timeout_sec": 0, 00:12:09.145 "reconnect_delay_sec": 0, 00:12:09.145 "fast_io_fail_timeout_sec": 0, 00:12:09.145 "generate_uuids": false, 00:12:09.145 "transport_tos": 0, 00:12:09.145 "io_path_stat": false, 00:12:09.145 "allow_accel_sequence": false 00:12:09.145 } 00:12:09.145 }, 00:12:09.145 { 00:12:09.145 "method": "bdev_nvme_set_hotplug", 00:12:09.145 "params": { 00:12:09.145 "period_us": 100000, 00:12:09.145 "enable": false 00:12:09.145 } 00:12:09.145 }, 00:12:09.145 { 00:12:09.146 "method": "bdev_malloc_create", 00:12:09.146 "params": { 00:12:09.146 "name": "malloc0", 00:12:09.146 "num_blocks": 8192, 00:12:09.146 "block_size": 4096, 00:12:09.146 "physical_block_size": 4096, 00:12:09.146 "uuid": "96e6f326-643a-45ee-b8be-d82980c63a4e", 00:12:09.146 "optimal_io_boundary": 0 00:12:09.146 } 00:12:09.146 }, 00:12:09.146 { 00:12:09.146 "method": "bdev_wait_for_examine" 00:12:09.146 } 00:12:09.146 ] 00:12:09.146 }, 00:12:09.146 { 00:12:09.146 "subsystem": "nbd", 00:12:09.146 "config": [] 00:12:09.146 }, 00:12:09.146 { 00:12:09.146 "subsystem": "scheduler", 00:12:09.146 "config": [ 00:12:09.146 { 00:12:09.146 "method": "framework_set_scheduler", 00:12:09.146 "params": { 00:12:09.146 "name": "static" 00:12:09.146 } 00:12:09.146 } 00:12:09.146 ] 00:12:09.146 }, 00:12:09.146 { 00:12:09.146 "subsystem": "nvmf", 00:12:09.146 "config": [ 00:12:09.146 { 00:12:09.146 "method": "nvmf_set_config", 00:12:09.146 "params": { 00:12:09.146 "discovery_filter": "match_any", 00:12:09.146 
"admin_cmd_passthru": { 00:12:09.146 "identify_ctrlr": false 00:12:09.146 } 00:12:09.146 } 00:12:09.146 }, 00:12:09.146 { 00:12:09.146 "method": "nvmf_set_max_subsystems", 00:12:09.146 "params": { 00:12:09.146 "max_subsystems": 1024 00:12:09.146 } 00:12:09.146 }, 00:12:09.146 { 00:12:09.146 "method": "nvmf_set_crdt", 00:12:09.146 "params": { 00:12:09.146 "crdt1": 0, 00:12:09.146 "crdt2": 0, 00:12:09.146 "crdt3": 0 00:12:09.146 } 00:12:09.146 }, 00:12:09.146 { 00:12:09.146 "method": "nvmf_create_transport", 00:12:09.146 "params": { 00:12:09.146 "trtype": "TCP", 00:12:09.146 "max_queue_depth": 128, 00:12:09.146 "max_io_qpairs_per_ctrlr": 127, 00:12:09.146 "in_capsule_data_size": 4096, 00:12:09.146 "max_io_size": 131072, 00:12:09.146 "io_unit_size": 131072, 00:12:09.146 "max_aq_depth": 128, 00:12:09.146 "num_shared_buffers": 511, 00:12:09.146 "buf_cache_size": 4294967295, 00:12:09.146 "dif_insert_or_strip": false, 00:12:09.146 "zcopy": false, 00:12:09.146 "c2h_success": false, 00:12:09.146 "sock_priority": 0, 00:12:09.146 "abort_timeout_sec": 1 00:12:09.146 } 00:12:09.146 }, 00:12:09.146 { 00:12:09.146 "method": "nvmf_create_subsystem", 00:12:09.146 "params": { 00:12:09.146 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.146 "allow_any_host": false, 00:12:09.146 "serial_number": "SPDK00000000000001", 00:12:09.146 "model_number": "SPDK bdev Controller", 00:12:09.146 "max_namespaces": 10, 00:12:09.146 "min_cntlid": 1, 00:12:09.146 "max_cntlid": 65519, 00:12:09.146 "ana_reporting": false 00:12:09.146 } 00:12:09.146 }, 00:12:09.146 { 00:12:09.146 "method": "nvmf_subsystem_add_host", 00:12:09.146 "params": { 00:12:09.146 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.146 "host": "nqn.2016-06.io.spdk:host1", 00:12:09.146 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:09.146 } 00:12:09.146 }, 00:12:09.146 { 00:12:09.146 "method": "nvmf_subsystem_add_ns", 00:12:09.146 "params": { 00:12:09.146 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.146 "namespace": { 00:12:09.146 "nsid": 1, 00:12:09.146 "bdev_name": "malloc0", 00:12:09.146 "nguid": "96E6F326643A45EEB8BED82980C63A4E", 00:12:09.146 "uuid": "96e6f326-643a-45ee-b8be-d82980c63a4e" 00:12:09.146 } 00:12:09.146 } 00:12:09.146 }, 00:12:09.146 { 00:12:09.146 "method": "nvmf_subsystem_add_listener", 00:12:09.146 "params": { 00:12:09.146 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.146 "listen_address": { 00:12:09.146 "trtype": "TCP", 00:12:09.146 "adrfam": "IPv4", 00:12:09.146 "traddr": "10.0.0.2", 00:12:09.146 "trsvcid": "4420" 00:12:09.146 }, 00:12:09.146 "secure_channel": true 00:12:09.146 } 00:12:09.146 } 00:12:09.146 ] 00:12:09.146 } 00:12:09.146 ] 00:12:09.146 }' 00:12:09.146 07:58:14 -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:12:09.405 07:58:15 -- target/tls.sh@206 -- # bdevperfconf='{ 00:12:09.405 "subsystems": [ 00:12:09.405 { 00:12:09.405 "subsystem": "iobuf", 00:12:09.406 "config": [ 00:12:09.406 { 00:12:09.406 "method": "iobuf_set_options", 00:12:09.406 "params": { 00:12:09.406 "small_pool_count": 8192, 00:12:09.406 "large_pool_count": 1024, 00:12:09.406 "small_bufsize": 8192, 00:12:09.406 "large_bufsize": 135168 00:12:09.406 } 00:12:09.406 } 00:12:09.406 ] 00:12:09.406 }, 00:12:09.406 { 00:12:09.406 "subsystem": "sock", 00:12:09.406 "config": [ 00:12:09.406 { 00:12:09.406 "method": "sock_impl_set_options", 00:12:09.406 "params": { 00:12:09.406 "impl_name": "uring", 00:12:09.406 "recv_buf_size": 2097152, 00:12:09.406 "send_buf_size": 2097152, 
00:12:09.406 "enable_recv_pipe": true, 00:12:09.406 "enable_quickack": false, 00:12:09.406 "enable_placement_id": 0, 00:12:09.406 "enable_zerocopy_send_server": false, 00:12:09.406 "enable_zerocopy_send_client": false, 00:12:09.406 "zerocopy_threshold": 0, 00:12:09.406 "tls_version": 0, 00:12:09.406 "enable_ktls": false 00:12:09.406 } 00:12:09.406 }, 00:12:09.406 { 00:12:09.406 "method": "sock_impl_set_options", 00:12:09.406 "params": { 00:12:09.406 "impl_name": "posix", 00:12:09.406 "recv_buf_size": 2097152, 00:12:09.406 "send_buf_size": 2097152, 00:12:09.406 "enable_recv_pipe": true, 00:12:09.406 "enable_quickack": false, 00:12:09.406 "enable_placement_id": 0, 00:12:09.406 "enable_zerocopy_send_server": true, 00:12:09.406 "enable_zerocopy_send_client": false, 00:12:09.406 "zerocopy_threshold": 0, 00:12:09.406 "tls_version": 0, 00:12:09.406 "enable_ktls": false 00:12:09.406 } 00:12:09.406 }, 00:12:09.406 { 00:12:09.406 "method": "sock_impl_set_options", 00:12:09.406 "params": { 00:12:09.406 "impl_name": "ssl", 00:12:09.406 "recv_buf_size": 4096, 00:12:09.406 "send_buf_size": 4096, 00:12:09.406 "enable_recv_pipe": true, 00:12:09.406 "enable_quickack": false, 00:12:09.406 "enable_placement_id": 0, 00:12:09.406 "enable_zerocopy_send_server": true, 00:12:09.406 "enable_zerocopy_send_client": false, 00:12:09.406 "zerocopy_threshold": 0, 00:12:09.406 "tls_version": 0, 00:12:09.406 "enable_ktls": false 00:12:09.406 } 00:12:09.406 } 00:12:09.406 ] 00:12:09.406 }, 00:12:09.406 { 00:12:09.406 "subsystem": "vmd", 00:12:09.406 "config": [] 00:12:09.406 }, 00:12:09.406 { 00:12:09.406 "subsystem": "accel", 00:12:09.406 "config": [ 00:12:09.406 { 00:12:09.406 "method": "accel_set_options", 00:12:09.406 "params": { 00:12:09.406 "small_cache_size": 128, 00:12:09.406 "large_cache_size": 16, 00:12:09.406 "task_count": 2048, 00:12:09.406 "sequence_count": 2048, 00:12:09.406 "buf_count": 2048 00:12:09.406 } 00:12:09.406 } 00:12:09.406 ] 00:12:09.406 }, 00:12:09.406 { 00:12:09.406 "subsystem": "bdev", 00:12:09.406 "config": [ 00:12:09.406 { 00:12:09.406 "method": "bdev_set_options", 00:12:09.406 "params": { 00:12:09.406 "bdev_io_pool_size": 65535, 00:12:09.406 "bdev_io_cache_size": 256, 00:12:09.406 "bdev_auto_examine": true, 00:12:09.406 "iobuf_small_cache_size": 128, 00:12:09.406 "iobuf_large_cache_size": 16 00:12:09.406 } 00:12:09.406 }, 00:12:09.406 { 00:12:09.406 "method": "bdev_raid_set_options", 00:12:09.406 "params": { 00:12:09.406 "process_window_size_kb": 1024 00:12:09.406 } 00:12:09.406 }, 00:12:09.406 { 00:12:09.406 "method": "bdev_iscsi_set_options", 00:12:09.406 "params": { 00:12:09.406 "timeout_sec": 30 00:12:09.406 } 00:12:09.406 }, 00:12:09.406 { 00:12:09.406 "method": "bdev_nvme_set_options", 00:12:09.406 "params": { 00:12:09.406 "action_on_timeout": "none", 00:12:09.406 "timeout_us": 0, 00:12:09.406 "timeout_admin_us": 0, 00:12:09.406 "keep_alive_timeout_ms": 10000, 00:12:09.406 "transport_retry_count": 4, 00:12:09.406 "arbitration_burst": 0, 00:12:09.406 "low_priority_weight": 0, 00:12:09.406 "medium_priority_weight": 0, 00:12:09.406 "high_priority_weight": 0, 00:12:09.406 "nvme_adminq_poll_period_us": 10000, 00:12:09.406 "nvme_ioq_poll_period_us": 0, 00:12:09.406 "io_queue_requests": 512, 00:12:09.406 "delay_cmd_submit": true, 00:12:09.406 "bdev_retry_count": 3, 00:12:09.406 "transport_ack_timeout": 0, 00:12:09.406 "ctrlr_loss_timeout_sec": 0, 00:12:09.406 "reconnect_delay_sec": 0, 00:12:09.406 "fast_io_fail_timeout_sec": 0, 00:12:09.406 "generate_uuids": false, 00:12:09.406 
"transport_tos": 0, 00:12:09.406 "io_path_stat": false, 00:12:09.406 "allow_accel_sequence": false 00:12:09.406 } 00:12:09.406 }, 00:12:09.406 { 00:12:09.406 "method": "bdev_nvme_attach_controller", 00:12:09.406 "params": { 00:12:09.406 "name": "TLSTEST", 00:12:09.406 "trtype": "TCP", 00:12:09.406 "adrfam": "IPv4", 00:12:09.406 "traddr": "10.0.0.2", 00:12:09.406 "trsvcid": "4420", 00:12:09.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.406 "prchk_reftag": false, 00:12:09.406 "prchk_guard": false, 00:12:09.406 "ctrlr_loss_timeout_sec": 0, 00:12:09.406 "reconnect_delay_sec": 0, 00:12:09.406 "fast_io_fail_timeout_sec": 0, 00:12:09.406 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:09.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:09.406 "hdgst": false, 00:12:09.406 "ddgst": false 00:12:09.406 } 00:12:09.406 }, 00:12:09.406 { 00:12:09.406 "method": "bdev_nvme_set_hotplug", 00:12:09.406 "params": { 00:12:09.406 "period_us": 100000, 00:12:09.406 "enable": false 00:12:09.406 } 00:12:09.406 }, 00:12:09.406 { 00:12:09.406 "method": "bdev_wait_for_examine" 00:12:09.406 } 00:12:09.406 ] 00:12:09.406 }, 00:12:09.406 { 00:12:09.406 "subsystem": "nbd", 00:12:09.406 "config": [] 00:12:09.406 } 00:12:09.406 ] 00:12:09.406 }' 00:12:09.406 07:58:15 -- target/tls.sh@208 -- # killprocess 74688 00:12:09.406 07:58:15 -- common/autotest_common.sh@926 -- # '[' -z 74688 ']' 00:12:09.406 07:58:15 -- common/autotest_common.sh@930 -- # kill -0 74688 00:12:09.406 07:58:15 -- common/autotest_common.sh@931 -- # uname 00:12:09.406 07:58:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:09.406 07:58:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74688 00:12:09.665 killing process with pid 74688 00:12:09.665 07:58:15 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:09.665 07:58:15 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:09.665 07:58:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74688' 00:12:09.665 07:58:15 -- common/autotest_common.sh@945 -- # kill 74688 00:12:09.665 Received shutdown signal, test time was about 10.000000 seconds 00:12:09.665 00:12:09.665 Latency(us) 00:12:09.665 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:09.665 =================================================================================================================== 00:12:09.665 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:12:09.665 07:58:15 -- common/autotest_common.sh@950 -- # wait 74688 00:12:09.665 07:58:15 -- target/tls.sh@209 -- # killprocess 74651 00:12:09.665 07:58:15 -- common/autotest_common.sh@926 -- # '[' -z 74651 ']' 00:12:09.665 07:58:15 -- common/autotest_common.sh@930 -- # kill -0 74651 00:12:09.665 07:58:15 -- common/autotest_common.sh@931 -- # uname 00:12:09.665 07:58:15 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:09.665 07:58:15 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74651 00:12:09.665 killing process with pid 74651 00:12:09.665 07:58:15 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:09.665 07:58:15 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:09.665 07:58:15 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74651' 00:12:09.665 07:58:15 -- common/autotest_common.sh@945 -- # kill 74651 00:12:09.665 07:58:15 -- common/autotest_common.sh@950 -- # wait 74651 00:12:09.925 07:58:15 -- target/tls.sh@212 -- # echo '{ 00:12:09.925 
"subsystems": [ 00:12:09.925 { 00:12:09.925 "subsystem": "iobuf", 00:12:09.925 "config": [ 00:12:09.925 { 00:12:09.925 "method": "iobuf_set_options", 00:12:09.925 "params": { 00:12:09.925 "small_pool_count": 8192, 00:12:09.925 "large_pool_count": 1024, 00:12:09.925 "small_bufsize": 8192, 00:12:09.925 "large_bufsize": 135168 00:12:09.925 } 00:12:09.925 } 00:12:09.925 ] 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "subsystem": "sock", 00:12:09.925 "config": [ 00:12:09.925 { 00:12:09.925 "method": "sock_impl_set_options", 00:12:09.925 "params": { 00:12:09.925 "impl_name": "uring", 00:12:09.925 "recv_buf_size": 2097152, 00:12:09.925 "send_buf_size": 2097152, 00:12:09.925 "enable_recv_pipe": true, 00:12:09.925 "enable_quickack": false, 00:12:09.925 "enable_placement_id": 0, 00:12:09.925 "enable_zerocopy_send_server": false, 00:12:09.925 "enable_zerocopy_send_client": false, 00:12:09.925 "zerocopy_threshold": 0, 00:12:09.925 "tls_version": 0, 00:12:09.925 "enable_ktls": false 00:12:09.925 } 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "method": "sock_impl_set_options", 00:12:09.925 "params": { 00:12:09.925 "impl_name": "posix", 00:12:09.925 "recv_buf_size": 2097152, 00:12:09.925 "send_buf_size": 2097152, 00:12:09.925 "enable_recv_pipe": true, 00:12:09.925 "enable_quickack": false, 00:12:09.925 "enable_placement_id": 0, 00:12:09.925 "enable_zerocopy_send_server": true, 00:12:09.925 "enable_zerocopy_send_client": false, 00:12:09.925 "zerocopy_threshold": 0, 00:12:09.925 "tls_version": 0, 00:12:09.925 "enable_ktls": false 00:12:09.925 } 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "method": "sock_impl_set_options", 00:12:09.925 "params": { 00:12:09.925 "impl_name": "ssl", 00:12:09.925 "recv_buf_size": 4096, 00:12:09.925 "send_buf_size": 4096, 00:12:09.925 "enable_recv_pipe": true, 00:12:09.925 "enable_quickack": false, 00:12:09.925 "enable_placement_id": 0, 00:12:09.925 "enable_zerocopy_send_server": true, 00:12:09.925 "enable_zerocopy_send_client": false, 00:12:09.925 "zerocopy_threshold": 0, 00:12:09.925 "tls_version": 0, 00:12:09.925 "enable_ktls": false 00:12:09.925 } 00:12:09.925 } 00:12:09.925 ] 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "subsystem": "vmd", 00:12:09.925 "config": [] 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "subsystem": "accel", 00:12:09.925 "config": [ 00:12:09.925 { 00:12:09.925 "method": "accel_set_options", 00:12:09.925 "params": { 00:12:09.925 "small_cache_size": 128, 00:12:09.925 "large_cache_size": 16, 00:12:09.925 "task_count": 2048, 00:12:09.925 "sequence_count": 2048, 00:12:09.925 "buf_count": 2048 00:12:09.925 } 00:12:09.925 } 00:12:09.925 ] 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "subsystem": "bdev", 00:12:09.925 "config": [ 00:12:09.925 { 00:12:09.925 "method": "bdev_set_options", 00:12:09.925 "params": { 00:12:09.925 "bdev_io_pool_size": 65535, 00:12:09.925 "bdev_io_cache_size": 256, 00:12:09.925 "bdev_auto_examine": true, 00:12:09.925 "iobuf_small_cache_size": 128, 00:12:09.925 "iobuf_large_cache_size": 16 00:12:09.925 } 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "method": "bdev_raid_set_options", 00:12:09.925 "params": { 00:12:09.925 "process_window_size_kb": 1024 00:12:09.925 } 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "method": "bdev_iscsi_set_options", 00:12:09.925 "params": { 00:12:09.925 "timeout_sec": 30 00:12:09.925 } 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "method": "bdev_nvme_set_options", 00:12:09.925 "params": { 00:12:09.925 "action_on_timeout": "none", 00:12:09.925 "timeout_us": 0, 00:12:09.925 "timeout_admin_us": 0, 00:12:09.925 
"keep_alive_timeout_ms": 10000, 00:12:09.925 "transport_retry_count": 4, 00:12:09.925 "arbitration_burst": 0, 00:12:09.925 "low_priority_weight": 0, 00:12:09.925 "medium_priority_weight": 0, 00:12:09.925 "high_priority_weight": 0, 00:12:09.925 "nvme_adminq_poll_period_us": 10000, 00:12:09.925 "nvme_ioq_poll_period_us": 0, 00:12:09.925 "io_queue_requests": 0, 00:12:09.925 "delay_cmd_submit": true, 00:12:09.925 "bdev_retry_count": 3, 00:12:09.925 "transport_ack_timeout": 0, 00:12:09.925 "ctrlr_loss_timeout_sec": 0, 00:12:09.925 "reconnect_delay_sec": 0, 00:12:09.925 "fast_io_fail_timeout_sec": 0, 00:12:09.925 "generate_uuids": false, 00:12:09.925 "transport_tos": 0, 00:12:09.925 "io_path_stat": false, 00:12:09.925 "allow_accel_sequence": false 00:12:09.925 } 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "method": "bdev_nvme_set_hotplug", 00:12:09.925 "params": { 00:12:09.925 "period_us": 100000, 00:12:09.925 "enable": false 00:12:09.925 } 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "method": "bdev_malloc_create", 00:12:09.925 "params": { 00:12:09.925 "name": "malloc0", 00:12:09.925 "num_blocks": 8192, 00:12:09.925 "block_size": 4096, 00:12:09.925 "physical_block_size": 4096, 00:12:09.925 "uuid": "96e6f326-643a-45ee-b8be-d82980c63a4e", 00:12:09.925 "optimal_io_boundary": 0 00:12:09.925 } 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "method": "bdev_wait_for_examine" 00:12:09.925 } 00:12:09.925 ] 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "subsystem": "nbd", 00:12:09.925 "config": [] 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "subsystem": "scheduler", 00:12:09.925 "config": [ 00:12:09.925 { 00:12:09.925 "method": "framework_set_scheduler", 00:12:09.925 "params": { 00:12:09.925 "name": "static" 00:12:09.925 } 00:12:09.925 } 00:12:09.925 ] 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "subsystem": "nvmf", 00:12:09.925 "config": [ 00:12:09.925 { 00:12:09.925 "method": "nvmf_set_config", 00:12:09.925 "params": { 00:12:09.925 "discovery_filter": "match_any", 00:12:09.925 "admin_cmd_passthru": { 00:12:09.925 "identify_ctrlr": false 00:12:09.925 } 00:12:09.925 } 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "method": "nvmf_set_max_subsystems", 00:12:09.925 "params": { 00:12:09.925 "max_subsystems": 1024 00:12:09.925 } 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "method": "nvmf_set_crdt", 00:12:09.925 "params": { 00:12:09.925 "crdt1": 0, 00:12:09.925 "crdt2": 0, 00:12:09.925 "crdt3": 0 00:12:09.925 } 00:12:09.925 }, 00:12:09.925 { 00:12:09.925 "method": "nvmf_create_transport", 00:12:09.925 "params": { 00:12:09.925 "trtype": "TCP", 00:12:09.925 "max_queue_depth": 128, 00:12:09.925 "max_io_qpairs_per_ctrlr": 127, 00:12:09.925 "in_capsule_data_size": 4096, 00:12:09.925 "max_io_size": 131072, 00:12:09.925 "io_unit_size": 131072, 00:12:09.925 "max_aq_depth": 128, 00:12:09.925 "num_shared_buffers": 511, 00:12:09.925 "buf_cache_size": 4294967295, 00:12:09.926 "dif_insert_or_strip": false, 00:12:09.926 "zcopy": false, 00:12:09.926 "c2h_success": false, 00:12:09.926 "sock_priority": 0, 00:12:09.926 "abort_timeout_sec": 1 00:12:09.926 } 00:12:09.926 }, 00:12:09.926 { 00:12:09.926 "method": "nvmf_create_subsystem", 00:12:09.926 "params": { 00:12:09.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.926 "allow_any_host": false, 00:12:09.926 "serial_number": "SPDK00000000000001", 00:12:09.926 "model_number": "SPDK bdev Controller", 00:12:09.926 "max_namespaces": 10, 00:12:09.926 "min_cntlid": 1, 00:12:09.926 "max_cntlid": 65519, 00:12:09.926 "ana_reporting": false 00:12:09.926 } 00:12:09.926 }, 00:12:09.926 { 
00:12:09.926 "method": "nvmf_subsystem_add_host", 00:12:09.926 "params": { 00:12:09.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.926 "host": "nqn.2016-06.io.spdk:host1", 00:12:09.926 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt" 00:12:09.926 } 00:12:09.926 }, 00:12:09.926 { 00:12:09.926 "method": "nvmf_subsystem_add_ns", 00:12:09.926 "params": { 00:12:09.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.926 "namespace": { 00:12:09.926 "nsid": 1, 00:12:09.926 "bdev_name": "malloc0", 00:12:09.926 "nguid": "96E6F326643A45EEB8BED82980C63A4E", 00:12:09.926 "uuid": "96e6f326-643a-45ee-b8be-d82980c63a4e" 00:12:09.926 } 00:12:09.926 } 00:12:09.926 }, 00:12:09.926 { 00:12:09.926 "method": "nvmf_subsystem_add_listener", 00:12:09.926 "params": { 00:12:09.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:09.926 "listen_address": { 00:12:09.926 "trtype": "TCP", 00:12:09.926 "adrfam": "IPv4", 00:12:09.926 "traddr": "10.0.0.2", 00:12:09.926 "trsvcid": "4420" 00:12:09.926 }, 00:12:09.926 "secure_channel": true 00:12:09.926 } 00:12:09.926 } 00:12:09.926 ] 00:12:09.926 } 00:12:09.926 ] 00:12:09.926 }' 00:12:09.926 07:58:15 -- target/tls.sh@212 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:12:09.926 07:58:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:09.926 07:58:15 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:09.926 07:58:15 -- common/autotest_common.sh@10 -- # set +x 00:12:09.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.926 07:58:15 -- nvmf/common.sh@469 -- # nvmfpid=74721 00:12:09.926 07:58:15 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:12:09.926 07:58:15 -- nvmf/common.sh@470 -- # waitforlisten 74721 00:12:09.926 07:58:15 -- common/autotest_common.sh@819 -- # '[' -z 74721 ']' 00:12:09.926 07:58:15 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.926 07:58:15 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:09.926 07:58:15 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.926 07:58:15 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:09.926 07:58:15 -- common/autotest_common.sh@10 -- # set +x 00:12:09.926 [2024-07-13 07:58:15.582700] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:09.926 [2024-07-13 07:58:15.583010] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.926 [2024-07-13 07:58:15.721701] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:10.185 [2024-07-13 07:58:15.753899] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:10.185 [2024-07-13 07:58:15.754314] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:10.185 [2024-07-13 07:58:15.754447] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:10.185 [2024-07-13 07:58:15.754465] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:10.185 [2024-07-13 07:58:15.754498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:10.185 [2024-07-13 07:58:15.929967] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:10.185 [2024-07-13 07:58:15.961931] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:10.185 [2024-07-13 07:58:15.962294] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:10.752 07:58:16 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:10.752 07:58:16 -- common/autotest_common.sh@852 -- # return 0 00:12:10.752 07:58:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:10.752 07:58:16 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:10.752 07:58:16 -- common/autotest_common.sh@10 -- # set +x 00:12:10.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:10.752 07:58:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:10.752 07:58:16 -- target/tls.sh@216 -- # bdevperf_pid=74747 00:12:10.752 07:58:16 -- target/tls.sh@217 -- # waitforlisten 74747 /var/tmp/bdevperf.sock 00:12:10.752 07:58:16 -- common/autotest_common.sh@819 -- # '[' -z 74747 ']' 00:12:10.752 07:58:16 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:10.752 07:58:16 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:10.752 07:58:16 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:10.752 07:58:16 -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:12:10.752 07:58:16 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:10.752 07:58:16 -- target/tls.sh@213 -- # echo '{ 00:12:10.752 "subsystems": [ 00:12:10.752 { 00:12:10.752 "subsystem": "iobuf", 00:12:10.752 "config": [ 00:12:10.752 { 00:12:10.752 "method": "iobuf_set_options", 00:12:10.752 "params": { 00:12:10.753 "small_pool_count": 8192, 00:12:10.753 "large_pool_count": 1024, 00:12:10.753 "small_bufsize": 8192, 00:12:10.753 "large_bufsize": 135168 00:12:10.753 } 00:12:10.753 } 00:12:10.753 ] 00:12:10.753 }, 00:12:10.753 { 00:12:10.753 "subsystem": "sock", 00:12:10.753 "config": [ 00:12:10.753 { 00:12:10.753 "method": "sock_impl_set_options", 00:12:10.753 "params": { 00:12:10.753 "impl_name": "uring", 00:12:10.753 "recv_buf_size": 2097152, 00:12:10.753 "send_buf_size": 2097152, 00:12:10.753 "enable_recv_pipe": true, 00:12:10.753 "enable_quickack": false, 00:12:10.753 "enable_placement_id": 0, 00:12:10.753 "enable_zerocopy_send_server": false, 00:12:10.753 "enable_zerocopy_send_client": false, 00:12:10.753 "zerocopy_threshold": 0, 00:12:10.753 "tls_version": 0, 00:12:10.753 "enable_ktls": false 00:12:10.753 } 00:12:10.753 }, 00:12:10.753 { 00:12:10.753 "method": "sock_impl_set_options", 00:12:10.753 "params": { 00:12:10.753 "impl_name": "posix", 00:12:10.753 "recv_buf_size": 2097152, 00:12:10.753 "send_buf_size": 2097152, 00:12:10.753 "enable_recv_pipe": true, 00:12:10.753 "enable_quickack": false, 00:12:10.753 "enable_placement_id": 0, 00:12:10.753 "enable_zerocopy_send_server": true, 00:12:10.753 "enable_zerocopy_send_client": false, 00:12:10.753 "zerocopy_threshold": 0, 00:12:10.753 "tls_version": 0, 00:12:10.753 "enable_ktls": false 00:12:10.753 } 00:12:10.753 }, 00:12:10.753 { 00:12:10.753 
"method": "sock_impl_set_options", 00:12:10.753 "params": { 00:12:10.753 "impl_name": "ssl", 00:12:10.753 "recv_buf_size": 4096, 00:12:10.753 "send_buf_size": 4096, 00:12:10.753 "enable_recv_pipe": true, 00:12:10.753 "enable_quickack": false, 00:12:10.753 "enable_placement_id": 0, 00:12:10.753 "enable_zerocopy_send_server": true, 00:12:10.753 "enable_zerocopy_send_client": false, 00:12:10.753 "zerocopy_threshold": 0, 00:12:10.753 "tls_version": 0, 00:12:10.753 "enable_ktls": false 00:12:10.753 } 00:12:10.753 } 00:12:10.753 ] 00:12:10.753 }, 00:12:10.753 { 00:12:10.753 "subsystem": "vmd", 00:12:10.753 "config": [] 00:12:10.753 }, 00:12:10.753 { 00:12:10.753 "subsystem": "accel", 00:12:10.753 "config": [ 00:12:10.753 { 00:12:10.753 "method": "accel_set_options", 00:12:10.753 "params": { 00:12:10.753 "small_cache_size": 128, 00:12:10.753 "large_cache_size": 16, 00:12:10.753 "task_count": 2048, 00:12:10.753 "sequence_count": 2048, 00:12:10.753 "buf_count": 2048 00:12:10.753 } 00:12:10.753 } 00:12:10.753 ] 00:12:10.753 }, 00:12:10.753 { 00:12:10.753 "subsystem": "bdev", 00:12:10.753 "config": [ 00:12:10.753 { 00:12:10.753 "method": "bdev_set_options", 00:12:10.753 "params": { 00:12:10.753 "bdev_io_pool_size": 65535, 00:12:10.753 "bdev_io_cache_size": 256, 00:12:10.753 "bdev_auto_examine": true, 00:12:10.753 "iobuf_small_cache_size": 128, 00:12:10.753 "iobuf_large_cache_size": 16 00:12:10.753 } 00:12:10.753 }, 00:12:10.753 { 00:12:10.753 "method": "bdev_raid_set_options", 00:12:10.753 "params": { 00:12:10.753 "process_window_size_kb": 1024 00:12:10.753 } 00:12:10.753 }, 00:12:10.753 { 00:12:10.753 "method": "bdev_iscsi_set_options", 00:12:10.753 "params": { 00:12:10.753 "timeout_sec": 30 00:12:10.753 } 00:12:10.753 }, 00:12:10.753 { 00:12:10.753 "method": "bdev_nvme_set_options", 00:12:10.753 "params": { 00:12:10.753 "action_on_timeout": "none", 00:12:10.753 "timeout_us": 0, 00:12:10.753 "timeout_admin_us": 0, 00:12:10.753 "keep_alive_timeout_ms": 10000, 00:12:10.753 "transport_retry_count": 4, 00:12:10.753 "arbitration_burst": 0, 00:12:10.753 "low_priority_weight": 0, 00:12:10.753 "medium_priority_weight": 0, 00:12:10.753 "high_priority_weight": 0, 00:12:10.753 "nvme_adminq_poll_period_us": 10000, 00:12:10.753 "nvme_ioq_poll_period_us": 0, 00:12:10.753 "io_queue_requests": 512, 00:12:10.753 "delay_cmd_submit": true, 00:12:10.753 "bdev_retry_count": 3, 00:12:10.753 "transport_ack_timeout": 0, 00:12:10.753 "ctrlr_loss_timeout_sec": 0, 00:12:10.753 "reconnect_delay_sec": 0, 00:12:10.753 "fast_io_fail_timeout_sec": 0, 00:12:10.753 "generate_uuids": false, 00:12:10.753 "transport_tos": 0, 00:12:10.753 "io_path_stat": false, 00:12:10.753 "allow_accel_sequence": false 00:12:10.753 } 00:12:10.753 }, 00:12:10.753 { 00:12:10.753 "method": "bdev_nvme_attach_controller", 00:12:10.753 "params": { 00:12:10.753 "name": "TLSTEST", 00:12:10.753 "trtype": "TCP", 00:12:10.753 "adrfam": "IPv4", 00:12:10.753 "traddr": "10.0.0.2", 00:12:10.753 "trsvcid": "4420", 00:12:10.753 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:10.753 "prchk_reftag": false, 00:12:10.753 "prchk_guard": false, 00:12:10.753 "ctrlr_loss_timeout_sec": 0, 00:12:10.753 "reconnect_delay_sec": 0, 00:12:10.753 "fast_io_fail_timeout_sec": 0, 00:12:10.753 "psk": "/home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt", 00:12:10.753 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:10.753 "hdgst": false, 00:12:10.753 "ddgst": false 00:12:10.753 } 00:12:10.753 }, 00:12:10.753 { 00:12:10.753 "method": "bdev_nvme_set_hotplug", 00:12:10.753 "params": { 
00:12:10.753 "period_us": 100000, 00:12:10.753 "enable": false 00:12:10.753 } 00:12:10.753 }, 00:12:10.753 { 00:12:10.753 "method": "bdev_wait_for_examine" 00:12:10.753 } 00:12:10.753 ] 00:12:10.753 }, 00:12:10.753 { 00:12:10.753 "subsystem": "nbd", 00:12:10.753 "config": [] 00:12:10.753 } 00:12:10.753 ] 00:12:10.753 }' 00:12:10.753 07:58:16 -- common/autotest_common.sh@10 -- # set +x 00:12:10.753 [2024-07-13 07:58:16.560461] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:10.753 [2024-07-13 07:58:16.560736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74747 ] 00:12:11.012 [2024-07-13 07:58:16.698974] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:11.012 [2024-07-13 07:58:16.730725] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:11.271 [2024-07-13 07:58:16.851291] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:11.839 07:58:17 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:11.839 07:58:17 -- common/autotest_common.sh@852 -- # return 0 00:12:11.839 07:58:17 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:12:11.839 Running I/O for 10 seconds... 00:12:21.847 00:12:21.847 Latency(us) 00:12:21.847 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:21.847 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:21.847 Verification LBA range: start 0x0 length 0x2000 00:12:21.847 TLSTESTn1 : 10.01 6071.43 23.72 0.00 0.00 21045.85 5510.98 20971.52 00:12:21.847 =================================================================================================================== 00:12:21.847 Total : 6071.43 23.72 0.00 0.00 21045.85 5510.98 20971.52 00:12:21.847 0 00:12:21.847 07:58:27 -- target/tls.sh@222 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:21.847 07:58:27 -- target/tls.sh@223 -- # killprocess 74747 00:12:21.847 07:58:27 -- common/autotest_common.sh@926 -- # '[' -z 74747 ']' 00:12:21.847 07:58:27 -- common/autotest_common.sh@930 -- # kill -0 74747 00:12:21.847 07:58:27 -- common/autotest_common.sh@931 -- # uname 00:12:21.847 07:58:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:21.847 07:58:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74747 00:12:22.107 killing process with pid 74747 00:12:22.107 Received shutdown signal, test time was about 10.000000 seconds 00:12:22.107 00:12:22.107 Latency(us) 00:12:22.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:22.107 =================================================================================================================== 00:12:22.107 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:22.107 07:58:27 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:22.107 07:58:27 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:22.107 07:58:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74747' 00:12:22.107 07:58:27 -- common/autotest_common.sh@945 -- # kill 74747 00:12:22.107 07:58:27 -- common/autotest_common.sh@950 -- # wait 74747 00:12:22.107 07:58:27 -- target/tls.sh@224 -- # killprocess 74721 00:12:22.107 07:58:27 -- 
common/autotest_common.sh@926 -- # '[' -z 74721 ']' 00:12:22.107 07:58:27 -- common/autotest_common.sh@930 -- # kill -0 74721 00:12:22.107 07:58:27 -- common/autotest_common.sh@931 -- # uname 00:12:22.107 07:58:27 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:22.107 07:58:27 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 74721 00:12:22.107 killing process with pid 74721 00:12:22.107 07:58:27 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:22.107 07:58:27 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:22.107 07:58:27 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 74721' 00:12:22.107 07:58:27 -- common/autotest_common.sh@945 -- # kill 74721 00:12:22.107 07:58:27 -- common/autotest_common.sh@950 -- # wait 74721 00:12:22.366 07:58:27 -- target/tls.sh@226 -- # trap - SIGINT SIGTERM EXIT 00:12:22.366 07:58:27 -- target/tls.sh@227 -- # cleanup 00:12:22.366 07:58:27 -- target/tls.sh@15 -- # process_shm --id 0 00:12:22.366 07:58:27 -- common/autotest_common.sh@796 -- # type=--id 00:12:22.366 07:58:27 -- common/autotest_common.sh@797 -- # id=0 00:12:22.366 07:58:27 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:12:22.366 07:58:27 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:22.366 07:58:27 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:12:22.366 07:58:27 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:12:22.366 07:58:27 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:12:22.366 07:58:27 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:22.366 nvmf_trace.0 00:12:22.366 07:58:28 -- common/autotest_common.sh@811 -- # return 0 00:12:22.366 07:58:28 -- target/tls.sh@16 -- # killprocess 74747 00:12:22.366 07:58:28 -- common/autotest_common.sh@926 -- # '[' -z 74747 ']' 00:12:22.366 07:58:28 -- common/autotest_common.sh@930 -- # kill -0 74747 00:12:22.366 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (74747) - No such process 00:12:22.366 Process with pid 74747 is not found 00:12:22.366 07:58:28 -- common/autotest_common.sh@953 -- # echo 'Process with pid 74747 is not found' 00:12:22.366 07:58:28 -- target/tls.sh@17 -- # nvmftestfini 00:12:22.366 07:58:28 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:22.366 07:58:28 -- nvmf/common.sh@116 -- # sync 00:12:22.366 07:58:28 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:22.366 07:58:28 -- nvmf/common.sh@119 -- # set +e 00:12:22.366 07:58:28 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:22.366 07:58:28 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:22.366 rmmod nvme_tcp 00:12:22.366 rmmod nvme_fabrics 00:12:22.366 rmmod nvme_keyring 00:12:22.366 07:58:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:22.366 07:58:28 -- nvmf/common.sh@123 -- # set -e 00:12:22.366 07:58:28 -- nvmf/common.sh@124 -- # return 0 00:12:22.366 07:58:28 -- nvmf/common.sh@477 -- # '[' -n 74721 ']' 00:12:22.366 07:58:28 -- nvmf/common.sh@478 -- # killprocess 74721 00:12:22.366 07:58:28 -- common/autotest_common.sh@926 -- # '[' -z 74721 ']' 00:12:22.366 Process with pid 74721 is not found 00:12:22.366 07:58:28 -- common/autotest_common.sh@930 -- # kill -0 74721 00:12:22.366 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (74721) - No such process 00:12:22.366 07:58:28 -- common/autotest_common.sh@953 -- # echo 
'Process with pid 74721 is not found' 00:12:22.366 07:58:28 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:22.366 07:58:28 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:22.366 07:58:28 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:22.367 07:58:28 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:22.367 07:58:28 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:22.367 07:58:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.367 07:58:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.367 07:58:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.626 07:58:28 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:22.626 07:58:28 -- target/tls.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/key1.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key2.txt /home/vagrant/spdk_repo/spdk/test/nvmf/target/key_long.txt 00:12:22.626 ************************************ 00:12:22.626 END TEST nvmf_tls 00:12:22.626 ************************************ 00:12:22.626 00:12:22.626 real 1m8.053s 00:12:22.626 user 1m45.154s 00:12:22.626 sys 0m23.555s 00:12:22.626 07:58:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:22.626 07:58:28 -- common/autotest_common.sh@10 -- # set +x 00:12:22.626 07:58:28 -- nvmf/nvmf.sh@60 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:22.626 07:58:28 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:22.626 07:58:28 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:22.626 07:58:28 -- common/autotest_common.sh@10 -- # set +x 00:12:22.626 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:12:22.626 ************************************ 00:12:22.626 START TEST nvmf_fips 00:12:22.626 ************************************ 00:12:22.626 07:58:28 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:12:22.626 * Looking for test storage... 
00:12:22.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:12:22.626 07:58:28 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:22.626 07:58:28 -- nvmf/common.sh@7 -- # uname -s 00:12:22.626 07:58:28 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:22.626 07:58:28 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:22.626 07:58:28 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:22.626 07:58:28 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:22.626 07:58:28 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:22.626 07:58:28 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:22.626 07:58:28 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:22.626 07:58:28 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:22.626 07:58:28 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:22.626 07:58:28 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:22.626 07:58:28 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:12:22.626 07:58:28 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:12:22.627 07:58:28 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:22.627 07:58:28 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:22.627 07:58:28 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:22.627 07:58:28 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:22.627 07:58:28 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:22.627 07:58:28 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:22.627 07:58:28 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:22.627 07:58:28 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.627 07:58:28 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.627 07:58:28 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.627 07:58:28 -- paths/export.sh@5 -- 
# export PATH 00:12:22.627 07:58:28 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:22.627 07:58:28 -- nvmf/common.sh@46 -- # : 0 00:12:22.627 07:58:28 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:22.627 07:58:28 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:22.627 07:58:28 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:22.627 07:58:28 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:22.627 07:58:28 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:22.627 07:58:28 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:22.627 07:58:28 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:22.627 07:58:28 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:22.627 07:58:28 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:22.627 07:58:28 -- fips/fips.sh@89 -- # check_openssl_version 00:12:22.627 07:58:28 -- fips/fips.sh@83 -- # local target=3.0.0 00:12:22.627 07:58:28 -- fips/fips.sh@85 -- # openssl version 00:12:22.627 07:58:28 -- fips/fips.sh@85 -- # awk '{print $2}' 00:12:22.627 07:58:28 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:12:22.627 07:58:28 -- scripts/common.sh@375 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:12:22.627 07:58:28 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:12:22.627 07:58:28 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:12:22.627 07:58:28 -- scripts/common.sh@335 -- # IFS=.-: 00:12:22.627 07:58:28 -- scripts/common.sh@335 -- # read -ra ver1 00:12:22.627 07:58:28 -- scripts/common.sh@336 -- # IFS=.-: 00:12:22.627 07:58:28 -- scripts/common.sh@336 -- # read -ra ver2 00:12:22.627 07:58:28 -- scripts/common.sh@337 -- # local 'op=>=' 00:12:22.627 07:58:28 -- scripts/common.sh@339 -- # ver1_l=3 00:12:22.627 07:58:28 -- scripts/common.sh@340 -- # ver2_l=3 00:12:22.627 07:58:28 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:12:22.627 07:58:28 -- scripts/common.sh@343 -- # case "$op" in 00:12:22.627 07:58:28 -- scripts/common.sh@347 -- # : 1 00:12:22.627 07:58:28 -- scripts/common.sh@363 -- # (( v = 0 )) 00:12:22.627 07:58:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:22.627 07:58:28 -- scripts/common.sh@364 -- # decimal 3 00:12:22.627 07:58:28 -- scripts/common.sh@352 -- # local d=3 00:12:22.627 07:58:28 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:22.627 07:58:28 -- scripts/common.sh@354 -- # echo 3 00:12:22.627 07:58:28 -- scripts/common.sh@364 -- # ver1[v]=3 00:12:22.627 07:58:28 -- scripts/common.sh@365 -- # decimal 3 00:12:22.627 07:58:28 -- scripts/common.sh@352 -- # local d=3 00:12:22.627 07:58:28 -- scripts/common.sh@353 -- # [[ 3 =~ ^[0-9]+$ ]] 00:12:22.627 07:58:28 -- scripts/common.sh@354 -- # echo 3 00:12:22.627 07:58:28 -- scripts/common.sh@365 -- # ver2[v]=3 00:12:22.627 07:58:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:22.627 07:58:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:22.627 07:58:28 -- scripts/common.sh@363 -- # (( v++ )) 00:12:22.627 07:58:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:22.627 07:58:28 -- scripts/common.sh@364 -- # decimal 0 00:12:22.627 07:58:28 -- scripts/common.sh@352 -- # local d=0 00:12:22.627 07:58:28 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:22.627 07:58:28 -- scripts/common.sh@354 -- # echo 0 00:12:22.627 07:58:28 -- scripts/common.sh@364 -- # ver1[v]=0 00:12:22.627 07:58:28 -- scripts/common.sh@365 -- # decimal 0 00:12:22.627 07:58:28 -- scripts/common.sh@352 -- # local d=0 00:12:22.627 07:58:28 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:22.627 07:58:28 -- scripts/common.sh@354 -- # echo 0 00:12:22.627 07:58:28 -- scripts/common.sh@365 -- # ver2[v]=0 00:12:22.627 07:58:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:22.627 07:58:28 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:12:22.627 07:58:28 -- scripts/common.sh@363 -- # (( v++ )) 00:12:22.627 07:58:28 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:22.627 07:58:28 -- scripts/common.sh@364 -- # decimal 9 00:12:22.627 07:58:28 -- scripts/common.sh@352 -- # local d=9 00:12:22.627 07:58:28 -- scripts/common.sh@353 -- # [[ 9 =~ ^[0-9]+$ ]] 00:12:22.627 07:58:28 -- scripts/common.sh@354 -- # echo 9 00:12:22.627 07:58:28 -- scripts/common.sh@364 -- # ver1[v]=9 00:12:22.627 07:58:28 -- scripts/common.sh@365 -- # decimal 0 00:12:22.627 07:58:28 -- scripts/common.sh@352 -- # local d=0 00:12:22.627 07:58:28 -- scripts/common.sh@353 -- # [[ 0 =~ ^[0-9]+$ ]] 00:12:22.627 07:58:28 -- scripts/common.sh@354 -- # echo 0 00:12:22.627 07:58:28 -- scripts/common.sh@365 -- # ver2[v]=0 00:12:22.627 07:58:28 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:12:22.627 07:58:28 -- scripts/common.sh@366 -- # return 0 00:12:22.627 07:58:28 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:12:22.627 07:58:28 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:12:22.627 07:58:28 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:12:22.628 07:58:28 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:12:22.628 07:58:28 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:12:22.628 07:58:28 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:12:22.628 07:58:28 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:12:22.628 07:58:28 -- fips/fips.sh@105 -- # export OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:12:22.628 07:58:28 -- fips/fips.sh@105 -- # OPENSSL_FORCE_FIPS_MODE=build_openssl_config 00:12:22.628 07:58:28 -- fips/fips.sh@114 -- # build_openssl_config 00:12:22.628 07:58:28 -- fips/fips.sh@37 -- # cat 00:12:22.628 07:58:28 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:12:22.628 07:58:28 -- fips/fips.sh@58 -- # cat - 00:12:22.628 07:58:28 -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:12:22.628 07:58:28 -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:12:22.628 07:58:28 -- fips/fips.sh@117 -- # mapfile -t providers 00:12:22.628 07:58:28 -- fips/fips.sh@117 -- # OPENSSL_CONF=spdk_fips.conf 00:12:22.628 07:58:28 -- fips/fips.sh@117 -- # openssl list -providers 00:12:22.628 07:58:28 -- fips/fips.sh@117 -- # grep name 00:12:22.887 07:58:28 -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:12:22.887 07:58:28 -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:12:22.887 07:58:28 -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:12:22.887 07:58:28 -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:12:22.887 07:58:28 -- fips/fips.sh@128 -- # : 00:12:22.887 07:58:28 -- common/autotest_common.sh@640 -- # local es=0 00:12:22.887 07:58:28 -- common/autotest_common.sh@642 -- # valid_exec_arg openssl md5 /dev/fd/62 00:12:22.887 07:58:28 -- common/autotest_common.sh@628 -- # local arg=openssl 00:12:22.887 07:58:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:22.887 07:58:28 -- common/autotest_common.sh@632 -- # type -t openssl 00:12:22.887 07:58:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:22.887 07:58:28 -- common/autotest_common.sh@634 -- # type -P openssl 00:12:22.887 07:58:28 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:12:22.887 07:58:28 -- common/autotest_common.sh@634 -- # arg=/usr/bin/openssl 00:12:22.887 07:58:28 -- common/autotest_common.sh@634 -- # [[ -x /usr/bin/openssl ]] 00:12:22.887 07:58:28 -- common/autotest_common.sh@643 -- # openssl md5 /dev/fd/62 00:12:22.887 Error setting digest 00:12:22.887 00826AD7EE7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:12:22.887 00826AD7EE7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:12:22.887 07:58:28 -- common/autotest_common.sh@643 -- # es=1 00:12:22.887 07:58:28 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:12:22.887 07:58:28 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:12:22.887 07:58:28 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 
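The provider and digest checks above can be reproduced by hand. A minimal sketch, assuming an OpenSSL 3.x build with the FIPS provider installed (these two commands are not part of the test scripts themselves):

    openssl list -providers | grep name    # expect both a base provider and a fips provider
    echo test | openssl md5                # expected to fail while FIPS restrictions are active

With the FIPS provider enforced, the md5 invocation fails with the same "unsupported" digital envelope error recorded in the log, which is exactly the condition fips.sh treats as success for this negative test.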
00:12:22.887 07:58:28 -- fips/fips.sh@131 -- # nvmftestinit 00:12:22.887 07:58:28 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:22.887 07:58:28 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:22.887 07:58:28 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:22.887 07:58:28 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:22.887 07:58:28 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:22.887 07:58:28 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:22.887 07:58:28 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:22.887 07:58:28 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:22.887 07:58:28 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:22.887 07:58:28 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:22.887 07:58:28 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:22.887 07:58:28 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:22.887 07:58:28 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:22.887 07:58:28 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:22.887 07:58:28 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:22.887 07:58:28 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:22.887 07:58:28 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:22.887 07:58:28 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:22.887 07:58:28 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:22.887 07:58:28 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:22.887 07:58:28 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:22.887 07:58:28 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:22.887 07:58:28 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:22.887 07:58:28 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:22.887 07:58:28 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:22.887 07:58:28 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:22.887 07:58:28 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:22.887 07:58:28 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:22.887 Cannot find device "nvmf_tgt_br" 00:12:22.887 07:58:28 -- nvmf/common.sh@154 -- # true 00:12:22.887 07:58:28 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:22.887 Cannot find device "nvmf_tgt_br2" 00:12:22.887 07:58:28 -- nvmf/common.sh@155 -- # true 00:12:22.887 07:58:28 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:22.887 07:58:28 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:22.887 Cannot find device "nvmf_tgt_br" 00:12:22.887 07:58:28 -- nvmf/common.sh@157 -- # true 00:12:22.887 07:58:28 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:22.887 Cannot find device "nvmf_tgt_br2" 00:12:22.887 07:58:28 -- nvmf/common.sh@158 -- # true 00:12:22.887 07:58:28 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:22.887 07:58:28 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:22.887 07:58:28 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:22.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:22.887 07:58:28 -- nvmf/common.sh@161 -- # true 00:12:22.887 07:58:28 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:22.887 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:12:22.887 07:58:28 -- nvmf/common.sh@162 -- # true 00:12:22.887 07:58:28 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:22.887 07:58:28 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:22.887 07:58:28 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:22.887 07:58:28 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:22.887 07:58:28 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:23.146 07:58:28 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:23.146 07:58:28 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:23.146 07:58:28 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:23.146 07:58:28 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:23.146 07:58:28 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:23.146 07:58:28 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:23.146 07:58:28 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:23.146 07:58:28 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:23.146 07:58:28 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:23.146 07:58:28 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:23.146 07:58:28 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:23.146 07:58:28 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:23.146 07:58:28 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:23.146 07:58:28 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:23.146 07:58:28 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:23.146 07:58:28 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:23.146 07:58:28 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:23.146 07:58:28 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:23.146 07:58:28 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:23.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:12:23.146 00:12:23.146 --- 10.0.0.2 ping statistics --- 00:12:23.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.146 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:12:23.146 07:58:28 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:23.146 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:23.146 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:12:23.146 00:12:23.146 --- 10.0.0.3 ping statistics --- 00:12:23.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.146 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:12:23.146 07:58:28 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:23.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:23.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:12:23.146 00:12:23.146 --- 10.0.0.1 ping statistics --- 00:12:23.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.146 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:12:23.146 07:58:28 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.146 07:58:28 -- nvmf/common.sh@421 -- # return 0 00:12:23.146 07:58:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:23.146 07:58:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.146 07:58:28 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:23.146 07:58:28 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:23.146 07:58:28 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.146 07:58:28 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:23.146 07:58:28 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:23.146 07:58:28 -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:12:23.146 07:58:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:23.147 07:58:28 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:23.147 07:58:28 -- common/autotest_common.sh@10 -- # set +x 00:12:23.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.147 07:58:28 -- nvmf/common.sh@469 -- # nvmfpid=75022 00:12:23.147 07:58:28 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:23.147 07:58:28 -- nvmf/common.sh@470 -- # waitforlisten 75022 00:12:23.147 07:58:28 -- common/autotest_common.sh@819 -- # '[' -z 75022 ']' 00:12:23.147 07:58:28 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.147 07:58:28 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:23.147 07:58:28 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.147 07:58:28 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:23.147 07:58:28 -- common/autotest_common.sh@10 -- # set +x 00:12:23.406 [2024-07-13 07:58:28.965344] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:12:23.406 [2024-07-13 07:58:28.965650] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.406 [2024-07-13 07:58:29.106427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:23.406 [2024-07-13 07:58:29.147631] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:23.406 [2024-07-13 07:58:29.148068] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.406 [2024-07-13 07:58:29.148218] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:23.406 [2024-07-13 07:58:29.148391] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
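For reference, the nvmf_veth_init sequence that just ran builds a small two-namespace topology: the initiator-side veth (nvmf_init_if, 10.0.0.1) stays in the default namespace, the target-side veths (nvmf_tgt_if at 10.0.0.2 and nvmf_tgt_if2 at 10.0.0.3) are moved into nvmf_tgt_ns_spdk, and the host-side peers are joined on the nvmf_br bridge before the ping checks. A condensed sketch using only the ip/iptables commands visible above (teardown of stale devices and error handling omitted):

    # Target namespace plus three veth pairs.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

    # Target-side interfaces live inside the namespace.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: 10.0.0.1 initiator, 10.0.0.2 and 10.0.0.3 target.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring everything up and bridge the host-side peers together.
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Allow NVMe/TCP (port 4420) in and let bridged traffic through.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT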
00:12:23.406 [2024-07-13 07:58:29.148542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:24.344 07:58:29 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:24.344 07:58:29 -- common/autotest_common.sh@852 -- # return 0 00:12:24.344 07:58:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:24.344 07:58:29 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:24.344 07:58:29 -- common/autotest_common.sh@10 -- # set +x 00:12:24.344 07:58:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:24.344 07:58:29 -- fips/fips.sh@134 -- # trap cleanup EXIT 00:12:24.344 07:58:29 -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:24.344 07:58:29 -- fips/fips.sh@138 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:24.344 07:58:29 -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:12:24.344 07:58:29 -- fips/fips.sh@140 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:24.344 07:58:29 -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:24.344 07:58:29 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:24.344 07:58:29 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:24.603 [2024-07-13 07:58:30.201648] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:24.603 [2024-07-13 07:58:30.217601] tcp.c: 912:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:12:24.603 [2024-07-13 07:58:30.217800] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:24.603 malloc0 00:12:24.603 07:58:30 -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:12:24.603 07:58:30 -- fips/fips.sh@148 -- # bdevperf_pid=75054 00:12:24.603 07:58:30 -- fips/fips.sh@149 -- # waitforlisten 75054 /var/tmp/bdevperf.sock 00:12:24.603 07:58:30 -- common/autotest_common.sh@819 -- # '[' -z 75054 ']' 00:12:24.603 07:58:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:12:24.603 07:58:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:24.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:12:24.603 07:58:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:12:24.603 07:58:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:24.604 07:58:30 -- common/autotest_common.sh@10 -- # set +x 00:12:24.604 07:58:30 -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:12:24.604 [2024-07-13 07:58:30.330721] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
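This FIPS case exercises NVMe/TCP with TLS: a pre-shared key in NVMe TLS key format is written to key.txt with 0600 permissions, the target exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 (with the "TLS support is considered experimental" notice above), and bdevperf then attaches with that PSK. A hedged sketch of the key preparation and the initiator-side attach, reusing the rpc.py call that appears a little later in this run; the key value is the test key from this log, not a production secret:

    # Write the NVMe TLS PSK used by this test and lock down permissions.
    key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > "$key"
    chmod 0600 "$key"

    # Attach a TLS-protected controller through the bdevperf RPC socket.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$key"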
00:12:24.604 [2024-07-13 07:58:30.330831] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75054 ] 00:12:24.862 [2024-07-13 07:58:30.466312] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:24.862 [2024-07-13 07:58:30.506837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.798 07:58:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:25.798 07:58:31 -- common/autotest_common.sh@852 -- # return 0 00:12:25.798 07:58:31 -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:25.798 [2024-07-13 07:58:31.558761] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:12:26.056 TLSTESTn1 00:12:26.056 07:58:31 -- fips/fips.sh@155 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:12:26.056 Running I/O for 10 seconds... 00:12:36.048 00:12:36.048 Latency(us) 00:12:36.048 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.048 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:12:36.048 Verification LBA range: start 0x0 length 0x2000 00:12:36.048 TLSTESTn1 : 10.02 5640.73 22.03 0.00 0.00 22653.35 5034.36 22401.40 00:12:36.048 =================================================================================================================== 00:12:36.048 Total : 5640.73 22.03 0.00 0.00 22653.35 5034.36 22401.40 00:12:36.048 0 00:12:36.048 07:58:41 -- fips/fips.sh@1 -- # cleanup 00:12:36.048 07:58:41 -- fips/fips.sh@15 -- # process_shm --id 0 00:12:36.048 07:58:41 -- common/autotest_common.sh@796 -- # type=--id 00:12:36.048 07:58:41 -- common/autotest_common.sh@797 -- # id=0 00:12:36.048 07:58:41 -- common/autotest_common.sh@798 -- # '[' --id = --pid ']' 00:12:36.048 07:58:41 -- common/autotest_common.sh@802 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:12:36.048 07:58:41 -- common/autotest_common.sh@802 -- # shm_files=nvmf_trace.0 00:12:36.048 07:58:41 -- common/autotest_common.sh@804 -- # [[ -z nvmf_trace.0 ]] 00:12:36.048 07:58:41 -- common/autotest_common.sh@808 -- # for n in $shm_files 00:12:36.048 07:58:41 -- common/autotest_common.sh@809 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:12:36.048 nvmf_trace.0 00:12:36.048 07:58:41 -- common/autotest_common.sh@811 -- # return 0 00:12:36.048 07:58:41 -- fips/fips.sh@16 -- # killprocess 75054 00:12:36.048 07:58:41 -- common/autotest_common.sh@926 -- # '[' -z 75054 ']' 00:12:36.048 07:58:41 -- common/autotest_common.sh@930 -- # kill -0 75054 00:12:36.048 07:58:41 -- common/autotest_common.sh@931 -- # uname 00:12:36.048 07:58:41 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:36.048 07:58:41 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75054 00:12:36.307 07:58:41 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:12:36.307 07:58:41 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:12:36.307 killing process with pid 75054 00:12:36.307 07:58:41 -- common/autotest_common.sh@944 -- # echo 'killing 
process with pid 75054' 00:12:36.307 07:58:41 -- common/autotest_common.sh@945 -- # kill 75054 00:12:36.307 Received shutdown signal, test time was about 10.000000 seconds 00:12:36.307 00:12:36.307 Latency(us) 00:12:36.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:36.307 =================================================================================================================== 00:12:36.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:12:36.307 07:58:41 -- common/autotest_common.sh@950 -- # wait 75054 00:12:36.307 07:58:42 -- fips/fips.sh@17 -- # nvmftestfini 00:12:36.307 07:58:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:36.307 07:58:42 -- nvmf/common.sh@116 -- # sync 00:12:36.307 07:58:42 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:36.307 07:58:42 -- nvmf/common.sh@119 -- # set +e 00:12:36.307 07:58:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:12:36.307 07:58:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:36.307 rmmod nvme_tcp 00:12:36.307 rmmod nvme_fabrics 00:12:36.307 rmmod nvme_keyring 00:12:36.307 07:58:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:36.307 07:58:42 -- nvmf/common.sh@123 -- # set -e 00:12:36.307 07:58:42 -- nvmf/common.sh@124 -- # return 0 00:12:36.307 07:58:42 -- nvmf/common.sh@477 -- # '[' -n 75022 ']' 00:12:36.307 07:58:42 -- nvmf/common.sh@478 -- # killprocess 75022 00:12:36.307 07:58:42 -- common/autotest_common.sh@926 -- # '[' -z 75022 ']' 00:12:36.307 07:58:42 -- common/autotest_common.sh@930 -- # kill -0 75022 00:12:36.307 07:58:42 -- common/autotest_common.sh@931 -- # uname 00:12:36.565 07:58:42 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:36.565 07:58:42 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75022 00:12:36.565 07:58:42 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:12:36.565 07:58:42 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:12:36.565 killing process with pid 75022 00:12:36.565 07:58:42 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75022' 00:12:36.565 07:58:42 -- common/autotest_common.sh@945 -- # kill 75022 00:12:36.565 07:58:42 -- common/autotest_common.sh@950 -- # wait 75022 00:12:36.565 07:58:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:36.565 07:58:42 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:36.565 07:58:42 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:36.565 07:58:42 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:36.565 07:58:42 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:36.565 07:58:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.566 07:58:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.566 07:58:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.566 07:58:42 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:36.566 07:58:42 -- fips/fips.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:12:36.566 00:12:36.566 real 0m14.082s 00:12:36.566 user 0m18.979s 00:12:36.566 sys 0m5.844s 00:12:36.566 07:58:42 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.566 ************************************ 00:12:36.566 END TEST nvmf_fips 00:12:36.566 ************************************ 00:12:36.566 07:58:42 -- common/autotest_common.sh@10 -- # set +x 00:12:36.566 07:58:42 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:12:36.566 07:58:42 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:36.566 07:58:42 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:36.566 07:58:42 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:36.566 07:58:42 -- common/autotest_common.sh@10 -- # set +x 00:12:36.566 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:12:36.824 ************************************ 00:12:36.824 START TEST nvmf_fuzz 00:12:36.824 ************************************ 00:12:36.824 07:58:42 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:12:36.824 * Looking for test storage... 00:12:36.824 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:36.824 07:58:42 -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:36.824 07:58:42 -- nvmf/common.sh@7 -- # uname -s 00:12:36.824 07:58:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:36.824 07:58:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:36.824 07:58:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:36.824 07:58:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:36.824 07:58:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:36.824 07:58:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:36.824 07:58:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:36.824 07:58:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:36.824 07:58:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:36.824 07:58:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:36.824 07:58:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:12:36.824 07:58:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:12:36.824 07:58:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:36.824 07:58:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:36.824 07:58:42 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:36.824 07:58:42 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:36.824 07:58:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:36.824 07:58:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:36.824 07:58:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:36.824 07:58:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.824 07:58:42 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.824 07:58:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.824 07:58:42 -- paths/export.sh@5 -- # export PATH 00:12:36.824 07:58:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:36.824 07:58:42 -- nvmf/common.sh@46 -- # : 0 00:12:36.824 07:58:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:36.824 07:58:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:36.824 07:58:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:36.824 07:58:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:36.824 07:58:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:36.824 07:58:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:36.824 07:58:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:36.824 07:58:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:36.824 07:58:42 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:12:36.824 07:58:42 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:36.824 07:58:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:36.824 07:58:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:36.824 07:58:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:36.824 07:58:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:36.824 07:58:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:36.824 07:58:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:36.824 07:58:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:36.824 07:58:42 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:36.824 07:58:42 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:36.824 07:58:42 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:36.824 07:58:42 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:36.824 07:58:42 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:36.824 07:58:42 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:36.824 07:58:42 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:36.824 07:58:42 
-- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:36.824 07:58:42 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:36.824 07:58:42 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:36.824 07:58:42 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:36.824 07:58:42 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:36.824 07:58:42 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:36.825 07:58:42 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:36.825 07:58:42 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:36.825 07:58:42 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:36.825 07:58:42 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:36.825 07:58:42 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:36.825 07:58:42 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:36.825 07:58:42 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:36.825 Cannot find device "nvmf_tgt_br" 00:12:36.825 07:58:42 -- nvmf/common.sh@154 -- # true 00:12:36.825 07:58:42 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:36.825 Cannot find device "nvmf_tgt_br2" 00:12:36.825 07:58:42 -- nvmf/common.sh@155 -- # true 00:12:36.825 07:58:42 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:36.825 07:58:42 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:36.825 Cannot find device "nvmf_tgt_br" 00:12:36.825 07:58:42 -- nvmf/common.sh@157 -- # true 00:12:36.825 07:58:42 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:36.825 Cannot find device "nvmf_tgt_br2" 00:12:36.825 07:58:42 -- nvmf/common.sh@158 -- # true 00:12:36.825 07:58:42 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:36.825 07:58:42 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:36.825 07:58:42 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:36.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.825 07:58:42 -- nvmf/common.sh@161 -- # true 00:12:36.825 07:58:42 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:36.825 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:36.825 07:58:42 -- nvmf/common.sh@162 -- # true 00:12:36.825 07:58:42 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:36.825 07:58:42 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:36.825 07:58:42 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:36.825 07:58:42 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:36.825 07:58:42 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:37.083 07:58:42 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:37.083 07:58:42 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:37.083 07:58:42 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:37.083 07:58:42 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:37.083 07:58:42 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:37.083 07:58:42 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:37.083 07:58:42 -- 
nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:37.083 07:58:42 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:37.083 07:58:42 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:37.083 07:58:42 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:37.083 07:58:42 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:37.083 07:58:42 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:37.083 07:58:42 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:37.083 07:58:42 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:37.083 07:58:42 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:37.083 07:58:42 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:37.083 07:58:42 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:37.083 07:58:42 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:37.083 07:58:42 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:37.083 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:37.083 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:12:37.083 00:12:37.083 --- 10.0.0.2 ping statistics --- 00:12:37.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.083 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:37.083 07:58:42 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:37.083 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:37.083 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.066 ms 00:12:37.083 00:12:37.083 --- 10.0.0.3 ping statistics --- 00:12:37.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.083 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:12:37.083 07:58:42 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:37.083 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:37.083 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:12:37.083 00:12:37.083 --- 10.0.0.1 ping statistics --- 00:12:37.083 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:37.083 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:12:37.083 07:58:42 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:37.083 07:58:42 -- nvmf/common.sh@421 -- # return 0 00:12:37.083 07:58:42 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:37.083 07:58:42 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:37.083 07:58:42 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:37.083 07:58:42 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:37.083 07:58:42 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:37.083 07:58:42 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:37.083 07:58:42 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:37.083 07:58:42 -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:12:37.083 07:58:42 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=75300 00:12:37.083 07:58:42 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:37.083 07:58:42 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 75300 00:12:37.083 07:58:42 -- common/autotest_common.sh@819 -- # '[' -z 75300 ']' 00:12:37.083 07:58:42 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:37.083 07:58:42 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:37.083 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:37.083 07:58:42 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
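Here the fuzz target is launched inside the namespace with a single-core mask (-m 0x1) and the harness blocks until the app's RPC socket answers. A minimal sketch of that wait, assuming the default /var/tmp/spdk.sock path and using rpc_get_methods as the liveness probe (the harness's waitforlisten does roughly this, with a 100-retry cap):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock

    for ((i = 0; i < 100; i++)); do
        # The socket appearing and answering an RPC means the target is ready.
        if [[ -S "$sock" ]] && "$rpc" -s "$sock" -t 1 rpc_get_methods &>/dev/null; then
            echo "nvmf_tgt is listening on $sock"
            break
        fi
        sleep 0.5
    done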
00:12:37.083 07:58:42 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:37.083 07:58:42 -- common/autotest_common.sh@10 -- # set +x 00:12:38.458 07:58:43 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:38.458 07:58:43 -- common/autotest_common.sh@852 -- # return 0 00:12:38.458 07:58:43 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:38.458 07:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.458 07:58:43 -- common/autotest_common.sh@10 -- # set +x 00:12:38.458 07:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.458 07:58:43 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:12:38.458 07:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.458 07:58:43 -- common/autotest_common.sh@10 -- # set +x 00:12:38.458 Malloc0 00:12:38.458 07:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.458 07:58:43 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:38.458 07:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.458 07:58:43 -- common/autotest_common.sh@10 -- # set +x 00:12:38.458 07:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.458 07:58:43 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:38.458 07:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.458 07:58:43 -- common/autotest_common.sh@10 -- # set +x 00:12:38.458 07:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.458 07:58:43 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.458 07:58:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.458 07:58:43 -- common/autotest_common.sh@10 -- # set +x 00:12:38.458 07:58:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.458 07:58:43 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:12:38.458 07:58:43 -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:12:38.458 Shutting down the fuzz application 00:12:38.458 07:58:44 -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:12:38.715 Shutting down the fuzz application 00:12:38.715 07:58:44 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.715 07:58:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:38.715 07:58:44 -- common/autotest_common.sh@10 -- # set +x 00:12:38.715 07:58:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:38.715 07:58:44 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:12:38.715 07:58:44 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:12:38.715 07:58:44 -- nvmf/common.sh@476 -- # nvmfcleanup 00:12:38.715 07:58:44 -- nvmf/common.sh@116 -- # sync 00:12:38.715 07:58:44 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:12:38.715 07:58:44 -- nvmf/common.sh@119 -- # set +e 00:12:38.715 07:58:44 -- 
nvmf/common.sh@120 -- # for i in {1..20} 00:12:38.715 07:58:44 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:12:38.715 rmmod nvme_tcp 00:12:38.973 rmmod nvme_fabrics 00:12:38.973 rmmod nvme_keyring 00:12:38.973 07:58:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:12:38.973 07:58:44 -- nvmf/common.sh@123 -- # set -e 00:12:38.973 07:58:44 -- nvmf/common.sh@124 -- # return 0 00:12:38.973 07:58:44 -- nvmf/common.sh@477 -- # '[' -n 75300 ']' 00:12:38.973 07:58:44 -- nvmf/common.sh@478 -- # killprocess 75300 00:12:38.973 07:58:44 -- common/autotest_common.sh@926 -- # '[' -z 75300 ']' 00:12:38.973 07:58:44 -- common/autotest_common.sh@930 -- # kill -0 75300 00:12:38.973 07:58:44 -- common/autotest_common.sh@931 -- # uname 00:12:38.973 07:58:44 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:12:38.973 07:58:44 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75300 00:12:38.973 07:58:44 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:12:38.973 07:58:44 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:12:38.973 killing process with pid 75300 00:12:38.973 07:58:44 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75300' 00:12:38.973 07:58:44 -- common/autotest_common.sh@945 -- # kill 75300 00:12:38.973 07:58:44 -- common/autotest_common.sh@950 -- # wait 75300 00:12:38.973 07:58:44 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:12:38.973 07:58:44 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:12:38.973 07:58:44 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:12:38.973 07:58:44 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:38.973 07:58:44 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:12:38.973 07:58:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:38.973 07:58:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:38.973 07:58:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:38.973 07:58:44 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:12:38.973 07:58:44 -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:12:39.232 00:12:39.232 real 0m2.409s 00:12:39.232 user 0m2.509s 00:12:39.232 sys 0m0.543s 00:12:39.232 07:58:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:39.232 07:58:44 -- common/autotest_common.sh@10 -- # set +x 00:12:39.232 ************************************ 00:12:39.232 END TEST nvmf_fuzz 00:12:39.232 ************************************ 00:12:39.232 07:58:44 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:39.232 07:58:44 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:12:39.232 07:58:44 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:12:39.232 07:58:44 -- common/autotest_common.sh@10 -- # set +x 00:12:39.232 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:12:39.232 ************************************ 00:12:39.232 START TEST nvmf_multiconnection 00:12:39.232 ************************************ 00:12:39.232 07:58:44 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:12:39.232 * Looking for test storage... 
00:12:39.232 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:39.232 07:58:44 -- target/multiconnection.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:39.232 07:58:44 -- nvmf/common.sh@7 -- # uname -s 00:12:39.232 07:58:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.232 07:58:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.232 07:58:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.232 07:58:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.232 07:58:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.232 07:58:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.232 07:58:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.232 07:58:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.232 07:58:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.232 07:58:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.232 07:58:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:12:39.232 07:58:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:12:39.232 07:58:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.232 07:58:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.232 07:58:44 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:39.232 07:58:44 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:39.232 07:58:44 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.232 07:58:44 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.232 07:58:44 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.232 07:58:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.232 07:58:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.232 07:58:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.232 07:58:44 -- 
paths/export.sh@5 -- # export PATH 00:12:39.232 07:58:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.232 07:58:44 -- nvmf/common.sh@46 -- # : 0 00:12:39.232 07:58:44 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:12:39.232 07:58:44 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:12:39.232 07:58:44 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:12:39.232 07:58:44 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.232 07:58:44 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.232 07:58:44 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:12:39.232 07:58:44 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:12:39.232 07:58:44 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:12:39.232 07:58:44 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:39.232 07:58:44 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:39.232 07:58:44 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:12:39.232 07:58:44 -- target/multiconnection.sh@16 -- # nvmftestinit 00:12:39.232 07:58:44 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:12:39.232 07:58:44 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.232 07:58:44 -- nvmf/common.sh@436 -- # prepare_net_devs 00:12:39.232 07:58:44 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:12:39.232 07:58:44 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:12:39.232 07:58:44 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.232 07:58:44 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:39.232 07:58:44 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.232 07:58:44 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:12:39.232 07:58:44 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:12:39.232 07:58:44 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:12:39.232 07:58:44 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:12:39.232 07:58:44 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:12:39.232 07:58:44 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:12:39.232 07:58:44 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.232 07:58:44 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:39.232 07:58:44 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:12:39.232 07:58:44 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:12:39.232 07:58:44 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:39.232 07:58:44 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:39.232 07:58:44 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:39.232 07:58:44 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.232 07:58:44 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:39.232 07:58:44 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:39.232 07:58:44 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:39.232 07:58:44 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:39.232 07:58:44 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:12:39.232 07:58:44 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:12:39.232 Cannot find device "nvmf_tgt_br" 00:12:39.232 07:58:44 -- nvmf/common.sh@154 -- # true 00:12:39.232 07:58:44 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:12:39.232 Cannot find device "nvmf_tgt_br2" 00:12:39.232 07:58:44 -- nvmf/common.sh@155 -- # true 00:12:39.232 07:58:44 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:12:39.232 07:58:44 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:12:39.232 Cannot find device "nvmf_tgt_br" 00:12:39.232 07:58:44 -- nvmf/common.sh@157 -- # true 00:12:39.232 07:58:44 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:12:39.232 Cannot find device "nvmf_tgt_br2" 00:12:39.232 07:58:45 -- nvmf/common.sh@158 -- # true 00:12:39.232 07:58:45 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:12:39.232 07:58:45 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:12:39.490 07:58:45 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:39.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:39.490 07:58:45 -- nvmf/common.sh@161 -- # true 00:12:39.490 07:58:45 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:39.490 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:39.490 07:58:45 -- nvmf/common.sh@162 -- # true 00:12:39.490 07:58:45 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:12:39.490 07:58:45 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:39.490 07:58:45 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:39.490 07:58:45 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:39.490 07:58:45 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:39.490 07:58:45 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:39.490 07:58:45 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:39.490 07:58:45 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:12:39.491 07:58:45 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:12:39.491 07:58:45 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:12:39.491 07:58:45 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:12:39.491 07:58:45 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:12:39.491 07:58:45 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:12:39.491 07:58:45 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:39.491 07:58:45 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:39.491 07:58:45 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:39.491 07:58:45 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:12:39.491 07:58:45 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:12:39.491 07:58:45 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:12:39.491 07:58:45 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:39.491 07:58:45 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:39.491 
07:58:45 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:39.491 07:58:45 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:39.491 07:58:45 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:12:39.491 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:39.491 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:12:39.491 00:12:39.491 --- 10.0.0.2 ping statistics --- 00:12:39.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.491 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:39.491 07:58:45 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:12:39.491 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:39.491 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:12:39.491 00:12:39.491 --- 10.0.0.3 ping statistics --- 00:12:39.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.491 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:12:39.491 07:58:45 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:39.491 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.491 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:39.491 00:12:39.491 --- 10.0.0.1 ping statistics --- 00:12:39.491 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.491 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:39.491 07:58:45 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.491 07:58:45 -- nvmf/common.sh@421 -- # return 0 00:12:39.491 07:58:45 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:12:39.491 07:58:45 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.491 07:58:45 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:12:39.491 07:58:45 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:12:39.491 07:58:45 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.491 07:58:45 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:12:39.491 07:58:45 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:12:39.491 07:58:45 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:12:39.491 07:58:45 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:12:39.491 07:58:45 -- common/autotest_common.sh@712 -- # xtrace_disable 00:12:39.491 07:58:45 -- common/autotest_common.sh@10 -- # set +x 00:12:39.491 07:58:45 -- nvmf/common.sh@469 -- # nvmfpid=75470 00:12:39.491 07:58:45 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.491 07:58:45 -- nvmf/common.sh@470 -- # waitforlisten 75470 00:12:39.491 07:58:45 -- common/autotest_common.sh@819 -- # '[' -z 75470 ']' 00:12:39.491 07:58:45 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.491 07:58:45 -- common/autotest_common.sh@824 -- # local max_retries=100 00:12:39.491 07:58:45 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.491 07:58:45 -- common/autotest_common.sh@828 -- # xtrace_disable 00:12:39.491 07:58:45 -- common/autotest_common.sh@10 -- # set +x 00:12:39.749 [2024-07-13 07:58:45.344823] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
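Unlike the earlier single-core runs, the multiconnection target is started with -m 0xF, so reactors come up on cores 0 through 3 (one bit per core in the hex mask), as the reactor messages just below confirm. A quick sketch of expanding such a mask, offered only as an illustration of the arithmetic:

    mask=0xF
    cores=()
    for ((c = 0; c < 64; c++)); do
        (( (mask >> c) & 1 )) && cores+=("$c")
    done
    echo "reactors expected on cores: ${cores[*]}"   # prints: 0 1 2 3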
00:12:39.749 [2024-07-13 07:58:45.344915] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:39.749 [2024-07-13 07:58:45.486146] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:39.749 [2024-07-13 07:58:45.528497] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:12:39.749 [2024-07-13 07:58:45.528677] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:39.749 [2024-07-13 07:58:45.528697] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:39.749 [2024-07-13 07:58:45.528707] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:39.749 [2024-07-13 07:58:45.528845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:39.749 [2024-07-13 07:58:45.530813] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:39.749 [2024-07-13 07:58:45.530895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:39.749 [2024-07-13 07:58:45.530904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.685 07:58:46 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:12:40.685 07:58:46 -- common/autotest_common.sh@852 -- # return 0 00:12:40.685 07:58:46 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:12:40.685 07:58:46 -- common/autotest_common.sh@718 -- # xtrace_disable 00:12:40.685 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.685 07:58:46 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.685 07:58:46 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:40.685 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.685 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.685 [2024-07-13 07:58:46.396473] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:40.685 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.685 07:58:46 -- target/multiconnection.sh@21 -- # seq 1 11 00:12:40.685 07:58:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.685 07:58:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:40.685 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.685 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.685 Malloc1 00:12:40.685 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.685 07:58:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:12:40.685 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.685 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.685 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.685 07:58:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.685 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.685 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.685 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.685 07:58:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.685 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.685 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.685 [2024-07-13 07:58:46.460118] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.685 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.685 07:58:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.685 07:58:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:12:40.685 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.685 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.685 Malloc2 00:12:40.685 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.685 07:58:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:40.685 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.685 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.685 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.685 07:58:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:12:40.685 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.685 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.685 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.685 07:58:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:40.685 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.685 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.945 07:58:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 Malloc3 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.945 07:58:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:12:40.945 
07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 Malloc4 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.945 07:58:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 Malloc5 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.945 07:58:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 Malloc6 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.945 07:58:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 Malloc7 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.945 07:58:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 Malloc8 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 
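The log above repeats the same four RPC calls for each of the 11 subsystems (Malloc bdev, subsystem, namespace, TCP listener). A minimal stand-alone sketch of that loop follows, assuming a running SPDK target, scripts/rpc.py available on PATH as rpc.py, and a TCP transport already created with nvmf_create_transport -t tcp; everything else mirrors the commands shown in the log.

#!/usr/bin/env bash
set -e
NVMF_SUBSYS=11
for i in $(seq 1 "$NVMF_SUBSYS"); do
  # 64 MB malloc bdev with 512-byte blocks, as in "bdev_malloc_create 64 512 -b MallocN"
  rpc.py bdev_malloc_create 64 512 -b "Malloc$i"
  # -a allows any host; -s sets the serial number the host later greps for via lsblk
  rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
  rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
  rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done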
00:12:40.945 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.945 07:58:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:40.945 07:58:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:12:40.945 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.945 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:40.945 Malloc9 00:12:40.946 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:40.946 07:58:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:12:40.946 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:40.946 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.202 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.202 07:58:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:12:41.202 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.202 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.202 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.202 07:58:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:12:41.202 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.202 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.202 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.202 07:58:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:41.202 07:58:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:12:41.202 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.202 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.202 Malloc10 00:12:41.202 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.202 07:58:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:12:41.202 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.202 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.202 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.203 07:58:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:12:41.203 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.203 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.203 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.203 07:58:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:12:41.203 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.203 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.203 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.203 07:58:46 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:41.203 07:58:46 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:12:41.203 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.203 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.203 Malloc11 00:12:41.203 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.203 07:58:46 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:12:41.203 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.203 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.203 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.203 07:58:46 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:12:41.203 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.203 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.203 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.203 07:58:46 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 00:12:41.203 07:58:46 -- common/autotest_common.sh@551 -- # xtrace_disable 00:12:41.203 07:58:46 -- common/autotest_common.sh@10 -- # set +x 00:12:41.203 07:58:46 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:12:41.203 07:58:46 -- target/multiconnection.sh@28 -- # seq 1 11 00:12:41.203 07:58:46 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:41.203 07:58:46 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.203 07:58:46 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:12:41.203 07:58:46 -- common/autotest_common.sh@1177 -- # local i=0 00:12:41.203 07:58:46 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.203 07:58:46 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:41.203 07:58:46 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:43.731 07:58:49 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:43.731 07:58:49 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:43.731 07:58:49 -- common/autotest_common.sh@1186 -- # grep -c SPDK1 00:12:43.731 07:58:49 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:43.731 07:58:49 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:43.731 07:58:49 -- common/autotest_common.sh@1187 -- # return 0 00:12:43.731 07:58:49 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:43.731 07:58:49 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:12:43.731 07:58:49 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:12:43.731 07:58:49 -- common/autotest_common.sh@1177 -- # local i=0 00:12:43.731 07:58:49 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:43.731 07:58:49 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:43.731 07:58:49 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:45.631 07:58:51 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:45.631 07:58:51 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:45.631 07:58:51 -- common/autotest_common.sh@1186 -- # grep -c SPDK2 00:12:45.631 07:58:51 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:45.631 07:58:51 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:45.631 07:58:51 -- common/autotest_common.sh@1187 -- # return 0 00:12:45.631 07:58:51 -- target/multiconnection.sh@28 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:12:45.631 07:58:51 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:12:45.631 07:58:51 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:12:45.631 07:58:51 -- common/autotest_common.sh@1177 -- # local i=0 00:12:45.631 07:58:51 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.631 07:58:51 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:45.631 07:58:51 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:47.532 07:58:53 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:47.532 07:58:53 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:47.532 07:58:53 -- common/autotest_common.sh@1186 -- # grep -c SPDK3 00:12:47.532 07:58:53 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:47.532 07:58:53 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.532 07:58:53 -- common/autotest_common.sh@1187 -- # return 0 00:12:47.532 07:58:53 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:47.532 07:58:53 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:12:47.790 07:58:53 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:12:47.790 07:58:53 -- common/autotest_common.sh@1177 -- # local i=0 00:12:47.790 07:58:53 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.790 07:58:53 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:47.790 07:58:53 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:49.691 07:58:55 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:49.691 07:58:55 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:49.691 07:58:55 -- common/autotest_common.sh@1186 -- # grep -c SPDK4 00:12:49.691 07:58:55 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:49.691 07:58:55 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.691 07:58:55 -- common/autotest_common.sh@1187 -- # return 0 00:12:49.691 07:58:55 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:49.691 07:58:55 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:12:49.950 07:58:55 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:12:49.950 07:58:55 -- common/autotest_common.sh@1177 -- # local i=0 00:12:49.950 07:58:55 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.950 07:58:55 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:49.950 07:58:55 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:51.851 07:58:57 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:51.851 07:58:57 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:51.851 07:58:57 -- common/autotest_common.sh@1186 -- # grep -c SPDK5 00:12:51.851 07:58:57 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:51.851 07:58:57 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.851 07:58:57 
-- common/autotest_common.sh@1187 -- # return 0 00:12:51.851 07:58:57 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:51.851 07:58:57 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:12:52.110 07:58:57 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:12:52.110 07:58:57 -- common/autotest_common.sh@1177 -- # local i=0 00:12:52.110 07:58:57 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:52.110 07:58:57 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:52.110 07:58:57 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:54.011 07:58:59 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:54.011 07:58:59 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:54.011 07:58:59 -- common/autotest_common.sh@1186 -- # grep -c SPDK6 00:12:54.011 07:58:59 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:54.011 07:58:59 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:54.011 07:58:59 -- common/autotest_common.sh@1187 -- # return 0 00:12:54.011 07:58:59 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:54.011 07:58:59 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:12:54.269 07:58:59 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:12:54.269 07:58:59 -- common/autotest_common.sh@1177 -- # local i=0 00:12:54.269 07:58:59 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:54.269 07:58:59 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:54.269 07:58:59 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:56.173 07:59:01 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:56.173 07:59:01 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:56.173 07:59:01 -- common/autotest_common.sh@1186 -- # grep -c SPDK7 00:12:56.173 07:59:01 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:12:56.173 07:59:01 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:56.173 07:59:01 -- common/autotest_common.sh@1187 -- # return 0 00:12:56.173 07:59:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:56.173 07:59:01 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:12:56.432 07:59:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:12:56.432 07:59:02 -- common/autotest_common.sh@1177 -- # local i=0 00:12:56.432 07:59:02 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.432 07:59:02 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:56.432 07:59:02 -- common/autotest_common.sh@1184 -- # sleep 2 00:12:58.330 07:59:04 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:12:58.330 07:59:04 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:12:58.330 07:59:04 -- common/autotest_common.sh@1186 -- # grep -c SPDK8 00:12:58.330 07:59:04 -- common/autotest_common.sh@1186 -- # nvme_devices=1 
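Each connect in this part of the log is followed by the waitforserial helper, which polls lsblk until a namespace with the expected serial (SPDK1 through SPDK11) shows up on the host. A hedged sketch of that connect-and-wait step, assuming nvme-cli is installed and reusing the host NQN/ID printed in the log:

i=9   # example subsystem index; the test iterates i over 1..11
HOSTUUID=13d3a838-6067-4799-8998-c5cad9c1d570
nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTUUID" --hostid="$HOSTUUID" \
    -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
# waitforserial: retry up to 15 times, sleeping 2 s between checks, until lsblk
# reports at least one block device whose serial matches SPDK$i
tries=0
until [ "$(lsblk -l -o NAME,SERIAL | grep -c "SPDK$i")" -ge 1 ]; do
    tries=$((tries + 1))
    [ "$tries" -gt 15 ] && { echo "timed out waiting for SPDK$i" >&2; exit 1; }
    sleep 2
done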
00:12:58.330 07:59:04 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.330 07:59:04 -- common/autotest_common.sh@1187 -- # return 0 00:12:58.330 07:59:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:12:58.330 07:59:04 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:12:58.589 07:59:04 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:12:58.589 07:59:04 -- common/autotest_common.sh@1177 -- # local i=0 00:12:58.589 07:59:04 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.589 07:59:04 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:12:58.589 07:59:04 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:00.502 07:59:06 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:00.502 07:59:06 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:00.502 07:59:06 -- common/autotest_common.sh@1186 -- # grep -c SPDK9 00:13:00.502 07:59:06 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:00.502 07:59:06 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:00.502 07:59:06 -- common/autotest_common.sh@1187 -- # return 0 00:13:00.502 07:59:06 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:00.502 07:59:06 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:13:00.760 07:59:06 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:13:00.760 07:59:06 -- common/autotest_common.sh@1177 -- # local i=0 00:13:00.760 07:59:06 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:00.760 07:59:06 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:00.760 07:59:06 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:02.662 07:59:08 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:02.662 07:59:08 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:02.662 07:59:08 -- common/autotest_common.sh@1186 -- # grep -c SPDK10 00:13:02.662 07:59:08 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:02.663 07:59:08 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:02.663 07:59:08 -- common/autotest_common.sh@1187 -- # return 0 00:13:02.663 07:59:08 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:02.663 07:59:08 -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:13:02.922 07:59:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:13:02.922 07:59:08 -- common/autotest_common.sh@1177 -- # local i=0 00:13:02.922 07:59:08 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:02.922 07:59:08 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:02.922 07:59:08 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:04.826 07:59:10 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:04.826 07:59:10 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:04.826 07:59:10 
-- common/autotest_common.sh@1186 -- # grep -c SPDK11 00:13:04.826 07:59:10 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:04.826 07:59:10 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:04.826 07:59:10 -- common/autotest_common.sh@1187 -- # return 0 00:13:04.826 07:59:10 -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:13:05.085 [global] 00:13:05.085 thread=1 00:13:05.085 invalidate=1 00:13:05.085 rw=read 00:13:05.085 time_based=1 00:13:05.085 runtime=10 00:13:05.085 ioengine=libaio 00:13:05.085 direct=1 00:13:05.085 bs=262144 00:13:05.085 iodepth=64 00:13:05.085 norandommap=1 00:13:05.085 numjobs=1 00:13:05.085 00:13:05.085 [job0] 00:13:05.085 filename=/dev/nvme0n1 00:13:05.085 [job1] 00:13:05.085 filename=/dev/nvme10n1 00:13:05.085 [job2] 00:13:05.085 filename=/dev/nvme1n1 00:13:05.085 [job3] 00:13:05.085 filename=/dev/nvme2n1 00:13:05.085 [job4] 00:13:05.085 filename=/dev/nvme3n1 00:13:05.085 [job5] 00:13:05.085 filename=/dev/nvme4n1 00:13:05.085 [job6] 00:13:05.085 filename=/dev/nvme5n1 00:13:05.085 [job7] 00:13:05.085 filename=/dev/nvme6n1 00:13:05.085 [job8] 00:13:05.085 filename=/dev/nvme7n1 00:13:05.085 [job9] 00:13:05.085 filename=/dev/nvme8n1 00:13:05.085 [job10] 00:13:05.085 filename=/dev/nvme9n1 00:13:05.085 Could not set queue depth (nvme0n1) 00:13:05.085 Could not set queue depth (nvme10n1) 00:13:05.085 Could not set queue depth (nvme1n1) 00:13:05.085 Could not set queue depth (nvme2n1) 00:13:05.085 Could not set queue depth (nvme3n1) 00:13:05.085 Could not set queue depth (nvme4n1) 00:13:05.085 Could not set queue depth (nvme5n1) 00:13:05.085 Could not set queue depth (nvme6n1) 00:13:05.085 Could not set queue depth (nvme7n1) 00:13:05.085 Could not set queue depth (nvme8n1) 00:13:05.085 Could not set queue depth (nvme9n1) 00:13:05.344 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:05.344 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:05.344 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:05.344 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:05.344 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:05.344 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:05.344 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:05.344 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:05.344 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:05.344 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:05.344 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:05.344 fio-3.35 00:13:05.344 Starting 11 threads 00:13:17.554 00:13:17.554 job0: (groupid=0, jobs=1): err= 0: pid=75778: Sat Jul 13 07:59:21 2024 00:13:17.554 read: IOPS=527, BW=132MiB/s (138MB/s)(1332MiB/10091msec) 00:13:17.554 slat (usec): min=21, max=69596, avg=1872.12, stdev=4244.57 
00:13:17.554 clat (msec): min=17, max=201, avg=119.20, stdev= 9.99 00:13:17.554 lat (msec): min=17, max=214, avg=121.07, stdev=10.40 00:13:17.554 clat percentiles (msec): 00:13:17.554 | 1.00th=[ 85], 5.00th=[ 110], 10.00th=[ 112], 20.00th=[ 114], 00:13:17.554 | 30.00th=[ 116], 40.00th=[ 117], 50.00th=[ 120], 60.00th=[ 121], 00:13:17.554 | 70.00th=[ 123], 80.00th=[ 124], 90.00th=[ 128], 95.00th=[ 132], 00:13:17.554 | 99.00th=[ 153], 99.50th=[ 159], 99.90th=[ 199], 99.95th=[ 201], 00:13:17.554 | 99.99th=[ 203] 00:13:17.554 bw ( KiB/s): min=122100, max=141312, per=6.46%, avg=134782.60, stdev=4161.61, samples=20 00:13:17.554 iops : min= 476, max= 552, avg=526.40, stdev=16.40, samples=20 00:13:17.554 lat (msec) : 20=0.04%, 50=0.17%, 100=1.14%, 250=98.65% 00:13:17.554 cpu : usr=0.28%, sys=2.28%, ctx=1318, majf=0, minf=4097 00:13:17.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:17.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:17.555 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.555 job1: (groupid=0, jobs=1): err= 0: pid=75779: Sat Jul 13 07:59:21 2024 00:13:17.555 read: IOPS=1007, BW=252MiB/s (264MB/s)(2523MiB/10017msec) 00:13:17.555 slat (usec): min=20, max=26110, avg=979.92, stdev=2118.67 00:13:17.555 clat (usec): min=3226, max=99865, avg=62453.92, stdev=7347.99 00:13:17.555 lat (usec): min=3859, max=99906, avg=63433.84, stdev=7372.88 00:13:17.555 clat percentiles (msec): 00:13:17.555 | 1.00th=[ 42], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 58], 00:13:17.555 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 65], 00:13:17.555 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 71], 95.00th=[ 73], 00:13:17.555 | 99.00th=[ 81], 99.50th=[ 87], 99.90th=[ 94], 99.95th=[ 95], 00:13:17.555 | 99.99th=[ 101] 00:13:17.555 bw ( KiB/s): min=230912, max=273920, per=12.31%, avg=256716.05, stdev=8874.37, samples=20 00:13:17.555 iops : min= 902, max= 1070, avg=1002.75, stdev=34.63, samples=20 00:13:17.555 lat (msec) : 4=0.03%, 10=0.05%, 20=0.15%, 50=2.87%, 100=96.90% 00:13:17.555 cpu : usr=0.54%, sys=3.44%, ctx=2318, majf=0, minf=4097 00:13:17.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:17.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:17.555 issued rwts: total=10092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.555 job2: (groupid=0, jobs=1): err= 0: pid=75780: Sat Jul 13 07:59:21 2024 00:13:17.555 read: IOPS=700, BW=175MiB/s (184MB/s)(1763MiB/10071msec) 00:13:17.555 slat (usec): min=21, max=47439, avg=1414.83, stdev=2972.91 00:13:17.555 clat (msec): min=33, max=154, avg=89.88, stdev= 7.57 00:13:17.555 lat (msec): min=38, max=154, avg=91.29, stdev= 7.64 00:13:17.555 clat percentiles (msec): 00:13:17.555 | 1.00th=[ 70], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:13:17.555 | 30.00th=[ 87], 40.00th=[ 89], 50.00th=[ 90], 60.00th=[ 91], 00:13:17.555 | 70.00th=[ 93], 80.00th=[ 95], 90.00th=[ 97], 95.00th=[ 101], 00:13:17.555 | 99.00th=[ 111], 99.50th=[ 117], 99.90th=[ 144], 99.95th=[ 144], 00:13:17.555 | 99.99th=[ 155] 00:13:17.555 bw ( KiB/s): min=164023, max=190464, per=8.58%, avg=178806.95, stdev=5549.16, samples=20 00:13:17.555 iops : 
min= 640, max= 744, avg=698.40, stdev=21.76, samples=20 00:13:17.555 lat (msec) : 50=0.48%, 100=94.00%, 250=5.52% 00:13:17.555 cpu : usr=0.40%, sys=3.05%, ctx=1752, majf=0, minf=4097 00:13:17.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:17.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:17.555 issued rwts: total=7050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.555 job3: (groupid=0, jobs=1): err= 0: pid=75781: Sat Jul 13 07:59:21 2024 00:13:17.555 read: IOPS=700, BW=175MiB/s (184MB/s)(1763MiB/10071msec) 00:13:17.555 slat (usec): min=21, max=29926, avg=1413.57, stdev=2958.19 00:13:17.555 clat (msec): min=17, max=157, avg=89.86, stdev= 7.73 00:13:17.555 lat (msec): min=17, max=157, avg=91.28, stdev= 7.80 00:13:17.555 clat percentiles (msec): 00:13:17.555 | 1.00th=[ 62], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:13:17.555 | 30.00th=[ 88], 40.00th=[ 89], 50.00th=[ 90], 60.00th=[ 91], 00:13:17.555 | 70.00th=[ 93], 80.00th=[ 94], 90.00th=[ 97], 95.00th=[ 101], 00:13:17.555 | 99.00th=[ 114], 99.50th=[ 122], 99.90th=[ 148], 99.95th=[ 157], 00:13:17.555 | 99.99th=[ 157] 00:13:17.555 bw ( KiB/s): min=165376, max=189952, per=8.58%, avg=178926.05, stdev=5717.54, samples=20 00:13:17.555 iops : min= 646, max= 742, avg=698.90, stdev=22.33, samples=20 00:13:17.555 lat (msec) : 20=0.01%, 50=0.26%, 100=95.15%, 250=4.58% 00:13:17.555 cpu : usr=0.39%, sys=2.46%, ctx=1718, majf=0, minf=4097 00:13:17.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:17.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:17.555 issued rwts: total=7053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.555 job4: (groupid=0, jobs=1): err= 0: pid=75782: Sat Jul 13 07:59:21 2024 00:13:17.555 read: IOPS=697, BW=174MiB/s (183MB/s)(1755MiB/10068msec) 00:13:17.555 slat (usec): min=21, max=39419, avg=1419.88, stdev=2957.19 00:13:17.555 clat (msec): min=53, max=157, avg=90.23, stdev= 6.56 00:13:17.555 lat (msec): min=53, max=157, avg=91.65, stdev= 6.60 00:13:17.555 clat percentiles (msec): 00:13:17.555 | 1.00th=[ 77], 5.00th=[ 82], 10.00th=[ 84], 20.00th=[ 86], 00:13:17.555 | 30.00th=[ 88], 40.00th=[ 89], 50.00th=[ 90], 60.00th=[ 92], 00:13:17.555 | 70.00th=[ 93], 80.00th=[ 94], 90.00th=[ 97], 95.00th=[ 100], 00:13:17.555 | 99.00th=[ 105], 99.50th=[ 113], 99.90th=[ 150], 99.95th=[ 157], 00:13:17.555 | 99.99th=[ 159] 00:13:17.555 bw ( KiB/s): min=165888, max=190464, per=8.54%, avg=178081.60, stdev=4814.88, samples=20 00:13:17.555 iops : min= 648, max= 744, avg=695.60, stdev=18.82, samples=20 00:13:17.555 lat (msec) : 100=96.55%, 250=3.45% 00:13:17.555 cpu : usr=0.32%, sys=2.47%, ctx=1724, majf=0, minf=4097 00:13:17.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:13:17.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:17.555 issued rwts: total=7020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.555 job5: (groupid=0, jobs=1): err= 0: pid=75783: Sat Jul 13 07:59:21 2024 00:13:17.555 
read: IOPS=983, BW=246MiB/s (258MB/s)(2467MiB/10032msec) 00:13:17.555 slat (usec): min=16, max=76111, avg=1008.25, stdev=2322.26 00:13:17.555 clat (msec): min=30, max=122, avg=64.00, stdev= 7.66 00:13:17.555 lat (msec): min=34, max=148, avg=65.01, stdev= 7.72 00:13:17.555 clat percentiles (msec): 00:13:17.555 | 1.00th=[ 50], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 59], 00:13:17.555 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 66], 00:13:17.555 | 70.00th=[ 67], 80.00th=[ 69], 90.00th=[ 72], 95.00th=[ 75], 00:13:17.555 | 99.00th=[ 86], 99.50th=[ 102], 99.90th=[ 124], 99.95th=[ 124], 00:13:17.555 | 99.99th=[ 124] 00:13:17.555 bw ( KiB/s): min=191358, max=266240, per=12.04%, avg=251001.35, stdev=15832.71, samples=20 00:13:17.555 iops : min= 747, max= 1040, avg=980.40, stdev=61.92, samples=20 00:13:17.555 lat (msec) : 50=1.70%, 100=97.76%, 250=0.54% 00:13:17.555 cpu : usr=0.51%, sys=4.17%, ctx=2265, majf=0, minf=4097 00:13:17.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:17.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:17.555 issued rwts: total=9868,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.555 job6: (groupid=0, jobs=1): err= 0: pid=75784: Sat Jul 13 07:59:21 2024 00:13:17.555 read: IOPS=1000, BW=250MiB/s (262MB/s)(2504MiB/10013msec) 00:13:17.555 slat (usec): min=20, max=101086, avg=983.93, stdev=2331.42 00:13:17.555 clat (msec): min=2, max=178, avg=62.91, stdev=11.14 00:13:17.555 lat (msec): min=2, max=222, avg=63.89, stdev=11.23 00:13:17.555 clat percentiles (msec): 00:13:17.555 | 1.00th=[ 44], 5.00th=[ 53], 10.00th=[ 55], 20.00th=[ 58], 00:13:17.555 | 30.00th=[ 60], 40.00th=[ 62], 50.00th=[ 63], 60.00th=[ 65], 00:13:17.555 | 70.00th=[ 66], 80.00th=[ 68], 90.00th=[ 71], 95.00th=[ 73], 00:13:17.555 | 99.00th=[ 82], 99.50th=[ 155], 99.90th=[ 178], 99.95th=[ 178], 00:13:17.555 | 99.99th=[ 180] 00:13:17.555 bw ( KiB/s): min=198144, max=272896, per=12.22%, avg=254744.55, stdev=14757.10, samples=20 00:13:17.555 iops : min= 774, max= 1066, avg=995.00, stdev=57.60, samples=20 00:13:17.555 lat (msec) : 4=0.14%, 10=0.05%, 20=0.21%, 50=2.56%, 100=96.30% 00:13:17.555 lat (msec) : 250=0.75% 00:13:17.555 cpu : usr=0.56%, sys=3.41%, ctx=2251, majf=0, minf=4097 00:13:17.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:17.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:17.555 issued rwts: total=10016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.555 job7: (groupid=0, jobs=1): err= 0: pid=75785: Sat Jul 13 07:59:21 2024 00:13:17.555 read: IOPS=526, BW=132MiB/s (138MB/s)(1328MiB/10087msec) 00:13:17.555 slat (usec): min=21, max=39324, avg=1878.80, stdev=4309.70 00:13:17.555 clat (msec): min=53, max=199, avg=119.50, stdev= 8.37 00:13:17.555 lat (msec): min=53, max=199, avg=121.38, stdev= 8.81 00:13:17.555 clat percentiles (msec): 00:13:17.555 | 1.00th=[ 107], 5.00th=[ 111], 10.00th=[ 113], 20.00th=[ 115], 00:13:17.555 | 30.00th=[ 116], 40.00th=[ 117], 50.00th=[ 120], 60.00th=[ 121], 00:13:17.555 | 70.00th=[ 123], 80.00th=[ 124], 90.00th=[ 127], 95.00th=[ 131], 00:13:17.555 | 99.00th=[ 146], 99.50th=[ 159], 99.90th=[ 197], 99.95th=[ 199], 00:13:17.555 
| 99.99th=[ 199] 00:13:17.555 bw ( KiB/s): min=115943, max=140288, per=6.44%, avg=134319.20, stdev=5596.19, samples=20 00:13:17.555 iops : min= 452, max= 548, avg=524.50, stdev=21.95, samples=20 00:13:17.555 lat (msec) : 100=0.60%, 250=99.40% 00:13:17.555 cpu : usr=0.34%, sys=2.23%, ctx=1354, majf=0, minf=4097 00:13:17.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:17.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:17.555 issued rwts: total=5311,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.555 job8: (groupid=0, jobs=1): err= 0: pid=75786: Sat Jul 13 07:59:21 2024 00:13:17.555 read: IOPS=524, BW=131MiB/s (138MB/s)(1323MiB/10086msec) 00:13:17.555 slat (usec): min=20, max=85291, avg=1885.18, stdev=4294.50 00:13:17.555 clat (msec): min=83, max=204, avg=119.92, stdev= 7.69 00:13:17.555 lat (msec): min=86, max=204, avg=121.81, stdev= 8.09 00:13:17.555 clat percentiles (msec): 00:13:17.555 | 1.00th=[ 108], 5.00th=[ 111], 10.00th=[ 113], 20.00th=[ 115], 00:13:17.555 | 30.00th=[ 117], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 121], 00:13:17.555 | 70.00th=[ 123], 80.00th=[ 124], 90.00th=[ 128], 95.00th=[ 132], 00:13:17.555 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 201], 99.95th=[ 201], 00:13:17.555 | 99.99th=[ 205] 00:13:17.555 bw ( KiB/s): min=111104, max=140288, per=6.42%, avg=133847.65, stdev=6099.96, samples=20 00:13:17.555 iops : min= 434, max= 548, avg=522.70, stdev=23.82, samples=20 00:13:17.555 lat (msec) : 100=0.32%, 250=99.68% 00:13:17.555 cpu : usr=0.32%, sys=1.76%, ctx=1353, majf=0, minf=4097 00:13:17.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:17.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:17.555 issued rwts: total=5293,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.555 job9: (groupid=0, jobs=1): err= 0: pid=75787: Sat Jul 13 07:59:21 2024 00:13:17.555 read: IOPS=524, BW=131MiB/s (137MB/s)(1323MiB/10094msec) 00:13:17.555 slat (usec): min=20, max=84949, avg=1886.58, stdev=4440.68 00:13:17.555 clat (msec): min=62, max=203, avg=120.08, stdev= 9.78 00:13:17.555 lat (msec): min=62, max=209, avg=121.96, stdev=10.19 00:13:17.555 clat percentiles (msec): 00:13:17.555 | 1.00th=[ 101], 5.00th=[ 111], 10.00th=[ 113], 20.00th=[ 115], 00:13:17.555 | 30.00th=[ 117], 40.00th=[ 118], 50.00th=[ 120], 60.00th=[ 122], 00:13:17.555 | 70.00th=[ 123], 80.00th=[ 125], 90.00th=[ 128], 95.00th=[ 132], 00:13:17.555 | 99.00th=[ 163], 99.50th=[ 165], 99.90th=[ 188], 99.95th=[ 188], 00:13:17.555 | 99.99th=[ 203] 00:13:17.555 bw ( KiB/s): min=119808, max=140288, per=6.42%, avg=133797.45, stdev=4992.58, samples=20 00:13:17.555 iops : min= 468, max= 548, avg=522.60, stdev=19.47, samples=20 00:13:17.555 lat (msec) : 100=1.08%, 250=98.92% 00:13:17.555 cpu : usr=0.26%, sys=1.84%, ctx=1304, majf=0, minf=4097 00:13:17.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:13:17.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:17.555 issued rwts: total=5290,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.555 latency : target=0, 
window=0, percentile=100.00%, depth=64 00:13:17.555 job10: (groupid=0, jobs=1): err= 0: pid=75788: Sat Jul 13 07:59:21 2024 00:13:17.555 read: IOPS=985, BW=246MiB/s (258MB/s)(2472MiB/10030msec) 00:13:17.555 slat (usec): min=21, max=42839, avg=1007.39, stdev=2188.44 00:13:17.555 clat (msec): min=20, max=111, avg=63.85, stdev= 7.35 00:13:17.555 lat (msec): min=30, max=113, avg=64.85, stdev= 7.38 00:13:17.555 clat percentiles (msec): 00:13:17.555 | 1.00th=[ 49], 5.00th=[ 54], 10.00th=[ 56], 20.00th=[ 58], 00:13:17.555 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 66], 00:13:17.555 | 70.00th=[ 67], 80.00th=[ 69], 90.00th=[ 72], 95.00th=[ 75], 00:13:17.555 | 99.00th=[ 88], 99.50th=[ 100], 99.90th=[ 106], 99.95th=[ 106], 00:13:17.555 | 99.99th=[ 112] 00:13:17.555 bw ( KiB/s): min=200080, max=268288, per=12.06%, avg=251462.50, stdev=14287.07, samples=20 00:13:17.555 iops : min= 781, max= 1048, avg=982.20, stdev=55.87, samples=20 00:13:17.555 lat (msec) : 50=1.59%, 100=97.93%, 250=0.49% 00:13:17.555 cpu : usr=0.58%, sys=3.83%, ctx=2224, majf=0, minf=4097 00:13:17.555 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:13:17.555 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:17.555 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:17.555 issued rwts: total=9887,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:17.555 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:17.555 00:13:17.555 Run status group 0 (all jobs): 00:13:17.555 READ: bw=2036MiB/s (2135MB/s), 131MiB/s-252MiB/s (137MB/s-264MB/s), io=20.1GiB (21.6GB), run=10013-10094msec 00:13:17.555 00:13:17.556 Disk stats (read/write): 00:13:17.556 nvme0n1: ios=10551/0, merge=0/0, ticks=1227536/0, in_queue=1227536, util=98.10% 00:13:17.556 nvme10n1: ios=19609/0, merge=0/0, ticks=1207944/0, in_queue=1207944, util=98.09% 00:13:17.556 nvme1n1: ios=13988/0, merge=0/0, ticks=1230794/0, in_queue=1230794, util=98.28% 00:13:17.556 nvme2n1: ios=14006/0, merge=0/0, ticks=1230684/0, in_queue=1230684, util=98.45% 00:13:17.556 nvme3n1: ios=13953/0, merge=0/0, ticks=1231520/0, in_queue=1231520, util=98.45% 00:13:17.556 nvme4n1: ios=19173/0, merge=0/0, ticks=1209138/0, in_queue=1209138, util=98.64% 00:13:17.556 nvme5n1: ios=19443/0, merge=0/0, ticks=1206916/0, in_queue=1206916, util=98.62% 00:13:17.556 nvme6n1: ios=10505/0, merge=0/0, ticks=1225248/0, in_queue=1225248, util=98.69% 00:13:17.556 nvme7n1: ios=10481/0, merge=0/0, ticks=1227385/0, in_queue=1227385, util=98.92% 00:13:17.556 nvme8n1: ios=10462/0, merge=0/0, ticks=1225904/0, in_queue=1225904, util=99.08% 00:13:17.556 nvme9n1: ios=19694/0, merge=0/0, ticks=1238589/0, in_queue=1238589, util=99.12% 00:13:17.556 07:59:21 -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:13:17.556 [global] 00:13:17.556 thread=1 00:13:17.556 invalidate=1 00:13:17.556 rw=randwrite 00:13:17.556 time_based=1 00:13:17.556 runtime=10 00:13:17.556 ioengine=libaio 00:13:17.556 direct=1 00:13:17.556 bs=262144 00:13:17.556 iodepth=64 00:13:17.556 norandommap=1 00:13:17.556 numjobs=1 00:13:17.556 00:13:17.556 [job0] 00:13:17.556 filename=/dev/nvme0n1 00:13:17.556 [job1] 00:13:17.556 filename=/dev/nvme10n1 00:13:17.556 [job2] 00:13:17.556 filename=/dev/nvme1n1 00:13:17.556 [job3] 00:13:17.556 filename=/dev/nvme2n1 00:13:17.556 [job4] 00:13:17.556 filename=/dev/nvme3n1 00:13:17.556 [job5] 00:13:17.556 filename=/dev/nvme4n1 00:13:17.556 [job6] 00:13:17.556 
filename=/dev/nvme5n1 00:13:17.556 [job7] 00:13:17.556 filename=/dev/nvme6n1 00:13:17.556 [job8] 00:13:17.556 filename=/dev/nvme7n1 00:13:17.556 [job9] 00:13:17.556 filename=/dev/nvme8n1 00:13:17.556 [job10] 00:13:17.556 filename=/dev/nvme9n1 00:13:17.556 Could not set queue depth (nvme0n1) 00:13:17.556 Could not set queue depth (nvme10n1) 00:13:17.556 Could not set queue depth (nvme1n1) 00:13:17.556 Could not set queue depth (nvme2n1) 00:13:17.556 Could not set queue depth (nvme3n1) 00:13:17.556 Could not set queue depth (nvme4n1) 00:13:17.556 Could not set queue depth (nvme5n1) 00:13:17.556 Could not set queue depth (nvme6n1) 00:13:17.556 Could not set queue depth (nvme7n1) 00:13:17.556 Could not set queue depth (nvme8n1) 00:13:17.556 Could not set queue depth (nvme9n1) 00:13:17.556 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.556 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.556 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.556 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.556 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.556 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.556 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.556 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.556 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.556 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.556 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:13:17.556 fio-3.35 00:13:17.556 Starting 11 threads 00:13:27.528 00:13:27.528 job0: (groupid=0, jobs=1): err= 0: pid=75929: Sat Jul 13 07:59:32 2024 00:13:27.528 write: IOPS=329, BW=82.4MiB/s (86.4MB/s)(837MiB/10162msec); 0 zone resets 00:13:27.528 slat (usec): min=16, max=67865, avg=2981.12, stdev=5409.86 00:13:27.528 clat (msec): min=28, max=346, avg=191.20, stdev=26.66 00:13:27.528 lat (msec): min=28, max=347, avg=194.18, stdev=26.48 00:13:27.528 clat percentiles (msec): 00:13:27.528 | 1.00th=[ 86], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 182], 00:13:27.528 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:13:27.528 | 70.00th=[ 194], 80.00th=[ 199], 90.00th=[ 207], 95.00th=[ 253], 00:13:27.528 | 99.00th=[ 275], 99.50th=[ 300], 99.90th=[ 338], 99.95th=[ 347], 00:13:27.528 | 99.99th=[ 347] 00:13:27.528 bw ( KiB/s): min=67584, max=88064, per=5.93%, avg=84102.75, stdev=6061.26, samples=20 00:13:27.528 iops : min= 264, max= 344, avg=328.50, stdev=23.75, samples=20 00:13:27.528 lat (msec) : 50=0.48%, 100=0.72%, 250=93.64%, 500=5.17% 00:13:27.528 cpu : usr=0.60%, sys=1.05%, ctx=1714, majf=0, minf=1 00:13:27.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:13:27.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:27.528 issued rwts: total=0,3348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.528 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.528 job1: (groupid=0, jobs=1): err= 0: pid=75930: Sat Jul 13 07:59:32 2024 00:13:27.528 write: IOPS=395, BW=98.8MiB/s (104MB/s)(1002MiB/10142msec); 0 zone resets 00:13:27.528 slat (usec): min=16, max=103797, avg=2425.79, stdev=4710.68 00:13:27.528 clat (msec): min=4, max=290, avg=159.43, stdev=30.99 00:13:27.528 lat (msec): min=6, max=290, avg=161.86, stdev=31.15 00:13:27.528 clat percentiles (msec): 00:13:27.528 | 1.00th=[ 53], 5.00th=[ 144], 10.00th=[ 148], 20.00th=[ 153], 00:13:27.528 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 159], 00:13:27.528 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 165], 95.00th=[ 232], 00:13:27.528 | 99.00th=[ 275], 99.50th=[ 275], 99.90th=[ 284], 99.95th=[ 284], 00:13:27.528 | 99.99th=[ 292] 00:13:27.528 bw ( KiB/s): min=55296, max=116736, per=7.12%, avg=100992.00, stdev=11442.49, samples=20 00:13:27.528 iops : min= 216, max= 456, avg=394.50, stdev=44.70, samples=20 00:13:27.528 lat (msec) : 10=0.10%, 20=0.17%, 50=0.62%, 100=1.92%, 250=92.71% 00:13:27.528 lat (msec) : 500=4.47% 00:13:27.528 cpu : usr=0.76%, sys=1.05%, ctx=5217, majf=0, minf=1 00:13:27.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:27.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:27.528 issued rwts: total=0,4008,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.528 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.528 job2: (groupid=0, jobs=1): err= 0: pid=75941: Sat Jul 13 07:59:32 2024 00:13:27.528 write: IOPS=326, BW=81.5MiB/s (85.5MB/s)(828MiB/10154msec); 0 zone resets 00:13:27.528 slat (usec): min=16, max=163371, avg=3017.10, stdev=6050.58 00:13:27.528 clat (msec): min=145, max=340, avg=193.12, stdev=23.41 00:13:27.528 lat (msec): min=157, max=340, avg=196.14, stdev=22.95 00:13:27.528 clat percentiles (msec): 00:13:27.528 | 1.00th=[ 171], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 182], 00:13:27.528 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:13:27.528 | 70.00th=[ 192], 80.00th=[ 197], 90.00th=[ 203], 95.00th=[ 257], 00:13:27.528 | 99.00th=[ 288], 99.50th=[ 317], 99.90th=[ 334], 99.95th=[ 342], 00:13:27.528 | 99.99th=[ 342] 00:13:27.528 bw ( KiB/s): min=45056, max=88064, per=5.86%, avg=83156.80, stdev=10471.84, samples=20 00:13:27.528 iops : min= 176, max= 344, avg=324.80, stdev=40.89, samples=20 00:13:27.528 lat (msec) : 250=94.60%, 500=5.40% 00:13:27.528 cpu : usr=0.45%, sys=0.76%, ctx=3837, majf=0, minf=1 00:13:27.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:13:27.528 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.528 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:27.528 issued rwts: total=0,3312,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.528 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.528 job3: (groupid=0, jobs=1): err= 0: pid=75943: Sat Jul 13 07:59:32 2024 00:13:27.528 write: IOPS=428, BW=107MiB/s (112MB/s)(1085MiB/10139msec); 0 zone resets 00:13:27.528 slat (usec): min=17, max=11520, avg=2298.76, stdev=4012.74 00:13:27.528 clat (msec): min=12, max=294, avg=147.15, stdev=28.02 00:13:27.528 lat (msec): min=13, max=294, avg=149.45, stdev=28.16 00:13:27.528 clat 
percentiles (msec): 00:13:27.528 | 1.00th=[ 61], 5.00th=[ 86], 10.00th=[ 91], 20.00th=[ 148], 00:13:27.528 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:13:27.528 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 165], 00:13:27.528 | 99.00th=[ 186], 99.50th=[ 245], 99.90th=[ 284], 99.95th=[ 284], 00:13:27.528 | 99.99th=[ 296] 00:13:27.528 bw ( KiB/s): min=98304, max=184320, per=7.72%, avg=109491.20, stdev=20775.82, samples=20 00:13:27.528 iops : min= 384, max= 720, avg=427.70, stdev=81.16, samples=20 00:13:27.528 lat (msec) : 20=0.18%, 50=0.55%, 100=12.83%, 250=86.01%, 500=0.41% 00:13:27.528 cpu : usr=0.83%, sys=1.10%, ctx=5329, majf=0, minf=1 00:13:27.528 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:13:27.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:27.529 issued rwts: total=0,4340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.529 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.529 job4: (groupid=0, jobs=1): err= 0: pid=75944: Sat Jul 13 07:59:32 2024 00:13:27.529 write: IOPS=427, BW=107MiB/s (112MB/s)(1083MiB/10134msec); 0 zone resets 00:13:27.529 slat (usec): min=16, max=16895, avg=2303.57, stdev=4018.53 00:13:27.529 clat (msec): min=18, max=287, avg=147.40, stdev=27.02 00:13:27.529 lat (msec): min=18, max=287, avg=149.71, stdev=27.14 00:13:27.529 clat percentiles (msec): 00:13:27.529 | 1.00th=[ 74], 5.00th=[ 86], 10.00th=[ 91], 20.00th=[ 148], 00:13:27.529 | 30.00th=[ 153], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 159], 00:13:27.529 | 70.00th=[ 159], 80.00th=[ 161], 90.00th=[ 163], 95.00th=[ 165], 00:13:27.529 | 99.00th=[ 180], 99.50th=[ 239], 99.90th=[ 279], 99.95th=[ 279], 00:13:27.529 | 99.99th=[ 288] 00:13:27.529 bw ( KiB/s): min=98304, max=178688, per=7.70%, avg=109260.80, stdev=19715.52, samples=20 00:13:27.529 iops : min= 384, max= 698, avg=426.80, stdev=77.01, samples=20 00:13:27.529 lat (msec) : 20=0.02%, 50=0.55%, 100=12.77%, 250=86.33%, 500=0.32% 00:13:27.529 cpu : usr=0.79%, sys=1.32%, ctx=5412, majf=0, minf=1 00:13:27.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:13:27.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:27.529 issued rwts: total=0,4331,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.529 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.529 job5: (groupid=0, jobs=1): err= 0: pid=75945: Sat Jul 13 07:59:32 2024 00:13:27.529 write: IOPS=1144, BW=286MiB/s (300MB/s)(2876MiB/10051msec); 0 zone resets 00:13:27.529 slat (usec): min=16, max=6009, avg=865.21, stdev=1451.86 00:13:27.529 clat (msec): min=8, max=102, avg=55.02, stdev= 4.46 00:13:27.529 lat (msec): min=8, max=102, avg=55.89, stdev= 4.35 00:13:27.529 clat percentiles (msec): 00:13:27.529 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 52], 20.00th=[ 53], 00:13:27.529 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 55], 60.00th=[ 56], 00:13:27.529 | 70.00th=[ 56], 80.00th=[ 57], 90.00th=[ 58], 95.00th=[ 60], 00:13:27.529 | 99.00th=[ 74], 99.50th=[ 79], 99.90th=[ 93], 99.95th=[ 101], 00:13:27.529 | 99.99th=[ 104] 00:13:27.529 bw ( KiB/s): min=263680, max=305152, per=20.64%, avg=292918.15, stdev=10744.02, samples=20 00:13:27.529 iops : min= 1030, max= 1192, avg=1144.20, stdev=41.98, samples=20 00:13:27.529 lat (msec) : 10=0.03%, 20=0.10%, 50=1.19%, 
100=98.66%, 250=0.02% 00:13:27.529 cpu : usr=1.65%, sys=2.43%, ctx=12878, majf=0, minf=1 00:13:27.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:13:27.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:27.529 issued rwts: total=0,11504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.529 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.529 job6: (groupid=0, jobs=1): err= 0: pid=75946: Sat Jul 13 07:59:32 2024 00:13:27.529 write: IOPS=1138, BW=285MiB/s (299MB/s)(2862MiB/10051msec); 0 zone resets 00:13:27.529 slat (usec): min=17, max=6223, avg=868.61, stdev=1447.89 00:13:27.529 clat (msec): min=8, max=105, avg=55.31, stdev= 3.60 00:13:27.529 lat (msec): min=8, max=105, avg=56.18, stdev= 3.47 00:13:27.529 clat percentiles (msec): 00:13:27.529 | 1.00th=[ 51], 5.00th=[ 52], 10.00th=[ 53], 20.00th=[ 54], 00:13:27.529 | 30.00th=[ 54], 40.00th=[ 55], 50.00th=[ 56], 60.00th=[ 56], 00:13:27.529 | 70.00th=[ 57], 80.00th=[ 57], 90.00th=[ 58], 95.00th=[ 59], 00:13:27.529 | 99.00th=[ 64], 99.50th=[ 66], 99.90th=[ 95], 99.95th=[ 100], 00:13:27.529 | 99.99th=[ 103] 00:13:27.529 bw ( KiB/s): min=281600, max=296448, per=20.54%, avg=291430.40, stdev=3855.15, samples=20 00:13:27.529 iops : min= 1100, max= 1158, avg=1138.40, stdev=15.06, samples=20 00:13:27.529 lat (msec) : 10=0.03%, 20=0.10%, 50=0.53%, 100=99.29%, 250=0.04% 00:13:27.529 cpu : usr=1.76%, sys=2.89%, ctx=13809, majf=0, minf=1 00:13:27.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:13:27.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:27.529 issued rwts: total=0,11447,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.529 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.529 job7: (groupid=0, jobs=1): err= 0: pid=75947: Sat Jul 13 07:59:32 2024 00:13:27.529 write: IOPS=393, BW=98.3MiB/s (103MB/s)(997MiB/10137msec); 0 zone resets 00:13:27.529 slat (usec): min=19, max=141046, avg=2442.54, stdev=5043.42 00:13:27.529 clat (msec): min=7, max=319, avg=160.27, stdev=30.94 00:13:27.529 lat (msec): min=9, max=319, avg=162.71, stdev=31.06 00:13:27.529 clat percentiles (msec): 00:13:27.529 | 1.00th=[ 58], 5.00th=[ 144], 10.00th=[ 148], 20.00th=[ 153], 00:13:27.529 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 157], 60.00th=[ 159], 00:13:27.529 | 70.00th=[ 161], 80.00th=[ 163], 90.00th=[ 165], 95.00th=[ 234], 00:13:27.529 | 99.00th=[ 279], 99.50th=[ 284], 99.90th=[ 305], 99.95th=[ 321], 00:13:27.529 | 99.99th=[ 321] 00:13:27.529 bw ( KiB/s): min=47198, max=113152, per=7.08%, avg=100433.50, stdev=12937.95, samples=20 00:13:27.529 iops : min= 184, max= 442, avg=392.30, stdev=50.62, samples=20 00:13:27.529 lat (msec) : 10=0.05%, 20=0.18%, 50=0.60%, 100=1.63%, 250=92.95% 00:13:27.529 lat (msec) : 500=4.59% 00:13:27.529 cpu : usr=0.66%, sys=0.87%, ctx=5484, majf=0, minf=1 00:13:27.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:13:27.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:27.529 issued rwts: total=0,3986,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.529 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.529 job8: (groupid=0, jobs=1): err= 0: pid=75948: Sat Jul 13 
07:59:32 2024 00:13:27.529 write: IOPS=328, BW=82.1MiB/s (86.1MB/s)(835MiB/10166msec); 0 zone resets 00:13:27.529 slat (usec): min=20, max=74549, avg=2991.84, stdev=5513.68 00:13:27.529 clat (msec): min=28, max=343, avg=191.73, stdev=27.03 00:13:27.529 lat (msec): min=28, max=343, avg=194.72, stdev=26.85 00:13:27.529 clat percentiles (msec): 00:13:27.529 | 1.00th=[ 86], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 182], 00:13:27.529 | 30.00th=[ 186], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 190], 00:13:27.529 | 70.00th=[ 192], 80.00th=[ 197], 90.00th=[ 209], 95.00th=[ 253], 00:13:27.529 | 99.00th=[ 275], 99.50th=[ 296], 99.90th=[ 334], 99.95th=[ 342], 00:13:27.529 | 99.99th=[ 342] 00:13:27.529 bw ( KiB/s): min=65536, max=88064, per=5.91%, avg=83865.60, stdev=6214.56, samples=20 00:13:27.529 iops : min= 256, max= 344, avg=327.60, stdev=24.28, samples=20 00:13:27.529 lat (msec) : 50=0.48%, 100=0.72%, 250=93.77%, 500=5.03% 00:13:27.529 cpu : usr=0.66%, sys=0.76%, ctx=4939, majf=0, minf=1 00:13:27.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:13:27.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:27.529 issued rwts: total=0,3340,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.529 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.529 job9: (groupid=0, jobs=1): err= 0: pid=75949: Sat Jul 13 07:59:32 2024 00:13:27.529 write: IOPS=333, BW=83.5MiB/s (87.5MB/s)(848MiB/10161msec); 0 zone resets 00:13:27.529 slat (usec): min=19, max=58708, avg=2941.04, stdev=5286.64 00:13:27.529 clat (msec): min=37, max=346, avg=188.63, stdev=24.47 00:13:27.529 lat (msec): min=37, max=346, avg=191.57, stdev=24.26 00:13:27.529 clat percentiles (msec): 00:13:27.529 | 1.00th=[ 100], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:13:27.529 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 186], 60.00th=[ 188], 00:13:27.529 | 70.00th=[ 190], 80.00th=[ 192], 90.00th=[ 197], 95.00th=[ 247], 00:13:27.529 | 99.00th=[ 266], 99.50th=[ 300], 99.90th=[ 334], 99.95th=[ 347], 00:13:27.529 | 99.99th=[ 347] 00:13:27.529 bw ( KiB/s): min=65536, max=90112, per=6.01%, avg=85255.15, stdev=5913.16, samples=20 00:13:27.529 iops : min= 256, max= 352, avg=333.00, stdev=23.17, samples=20 00:13:27.529 lat (msec) : 50=0.24%, 100=0.83%, 250=94.46%, 500=4.48% 00:13:27.529 cpu : usr=0.69%, sys=1.10%, ctx=5155, majf=0, minf=1 00:13:27.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.1% 00:13:27.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.529 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:27.529 issued rwts: total=0,3393,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.529 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.529 job10: (groupid=0, jobs=1): err= 0: pid=75950: Sat Jul 13 07:59:32 2024 00:13:27.529 write: IOPS=329, BW=82.4MiB/s (86.4MB/s)(837MiB/10163msec); 0 zone resets 00:13:27.529 slat (usec): min=16, max=73797, avg=2981.97, stdev=5504.48 00:13:27.529 clat (msec): min=75, max=347, avg=191.21, stdev=23.65 00:13:27.529 lat (msec): min=75, max=347, avg=194.19, stdev=23.31 00:13:27.529 clat percentiles (msec): 00:13:27.529 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 182], 00:13:27.529 | 30.00th=[ 184], 40.00th=[ 186], 50.00th=[ 188], 60.00th=[ 188], 00:13:27.529 | 70.00th=[ 190], 80.00th=[ 194], 90.00th=[ 199], 95.00th=[ 253], 00:13:27.529 | 99.00th=[ 275], 
99.50th=[ 300], 99.90th=[ 338], 99.95th=[ 347], 00:13:27.529 | 99.99th=[ 347] 00:13:27.529 bw ( KiB/s): min=57344, max=88064, per=5.93%, avg=84096.00, stdev=8049.07, samples=20 00:13:27.529 iops : min= 224, max= 344, avg=328.50, stdev=31.44, samples=20 00:13:27.529 lat (msec) : 100=0.36%, 250=94.24%, 500=5.41% 00:13:27.529 cpu : usr=0.61%, sys=0.88%, ctx=3358, majf=0, minf=1 00:13:27.529 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:13:27.529 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.530 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:13:27.530 issued rwts: total=0,3348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.530 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.530 00:13:27.530 Run status group 0 (all jobs): 00:13:27.530 WRITE: bw=1386MiB/s (1453MB/s), 81.5MiB/s-286MiB/s (85.5MB/s-300MB/s), io=13.8GiB (14.8GB), run=10051-10166msec 00:13:27.530 00:13:27.530 Disk stats (read/write): 00:13:27.530 nvme0n1: ios=50/6570, merge=0/0, ticks=43/1210709, in_queue=1210752, util=98.01% 00:13:27.530 nvme10n1: ios=49/7884, merge=0/0, ticks=52/1214830, in_queue=1214882, util=98.06% 00:13:27.530 nvme1n1: ios=45/6484, merge=0/0, ticks=84/1209737, in_queue=1209821, util=98.22% 00:13:27.530 nvme2n1: ios=24/8552, merge=0/0, ticks=23/1213060, in_queue=1213083, util=98.12% 00:13:27.530 nvme3n1: ios=32/8529, merge=0/0, ticks=47/1212666, in_queue=1212713, util=98.40% 00:13:27.530 nvme4n1: ios=0/22849, merge=0/0, ticks=0/1215901, in_queue=1215901, util=98.22% 00:13:27.530 nvme5n1: ios=0/22753, merge=0/0, ticks=0/1218461, in_queue=1218461, util=98.44% 00:13:27.530 nvme6n1: ios=0/7843, merge=0/0, ticks=0/1213990, in_queue=1213990, util=98.50% 00:13:27.530 nvme7n1: ios=0/6550, merge=0/0, ticks=0/1211593, in_queue=1211593, util=98.78% 00:13:27.530 nvme8n1: ios=0/6660, merge=0/0, ticks=0/1211139, in_queue=1211139, util=98.88% 00:13:27.530 nvme9n1: ios=0/6570, merge=0/0, ticks=0/1211456, in_queue=1211456, util=98.99% 00:13:27.530 07:59:32 -- target/multiconnection.sh@36 -- # sync 00:13:27.530 07:59:32 -- target/multiconnection.sh@37 -- # seq 1 11 00:13:27.530 07:59:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:27.530 07:59:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:27.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.530 07:59:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:13:27.530 07:59:32 -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK1 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK1 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1210 -- # return 0 00:13:27.530 07:59:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:27.530 07:59:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.530 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:27.530 07:59:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.530 07:59:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:27.530 07:59:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:13:27.530 
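The teardown that begins here repeats the same three-step pattern for every subsystem, cnode1 through cnode11: disconnect the kernel initiator, poll lsblk until the matching serial (SPDK1..SPDK11) disappears, then delete the subsystem over JSON-RPC. A condensed sketch of that loop, assuming scripts/rpc.py as the RPC client (the harness goes through its rpc_cmd wrapper instead):

#!/usr/bin/env bash
# Sketch of the per-subsystem teardown visible in the trace; the rpc.py path
# is an assumption based on the repo layout shown elsewhere in this log, and
# the serial names SPDK1..SPDK11 come from the trace itself.
set -e

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

for i in $(seq 1 11); do
    # Drop the initiator-side connection to this subsystem.
    nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"

    # Wait until no block device with the matching serial remains.
    while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
        sleep 1
    done

    # Remove the subsystem from the running SPDK target.
    "$RPC" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"
done
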
NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:13:27.530 07:59:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:13:27.530 07:59:32 -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK2 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK2 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1210 -- # return 0 00:13:27.530 07:59:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:27.530 07:59:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.530 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:27.530 07:59:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.530 07:59:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:27.530 07:59:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:13:27.530 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:13:27.530 07:59:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:13:27.530 07:59:32 -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK3 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK3 00:13:27.530 07:59:32 -- common/autotest_common.sh@1210 -- # return 0 00:13:27.530 07:59:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:27.530 07:59:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.530 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:27.530 07:59:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.530 07:59:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:27.530 07:59:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:13:27.530 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:13:27.530 07:59:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:13:27.530 07:59:32 -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK4 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK4 00:13:27.530 07:59:32 -- common/autotest_common.sh@1210 -- # return 0 00:13:27.530 07:59:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:27.530 07:59:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.530 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:27.530 07:59:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.530 07:59:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:27.530 07:59:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:13:27.530 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:13:27.530 07:59:32 -- 
target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:13:27.530 07:59:32 -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK5 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK5 00:13:27.530 07:59:32 -- common/autotest_common.sh@1210 -- # return 0 00:13:27.530 07:59:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:13:27.530 07:59:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.530 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:27.530 07:59:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.530 07:59:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:27.530 07:59:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:13:27.530 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:13:27.530 07:59:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:13:27.530 07:59:32 -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK6 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK6 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1210 -- # return 0 00:13:27.530 07:59:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:13:27.530 07:59:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.530 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:27.530 07:59:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.530 07:59:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:27.530 07:59:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:13:27.530 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:13:27.530 07:59:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:13:27.530 07:59:32 -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK7 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK7 00:13:27.530 07:59:32 -- common/autotest_common.sh@1210 -- # return 0 00:13:27.530 07:59:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:13:27.530 07:59:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.530 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:27.530 07:59:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.530 07:59:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:27.530 07:59:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:13:27.530 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:13:27.530 07:59:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:13:27.530 07:59:32 -- 
common/autotest_common.sh@1198 -- # local i=0 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK8 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK8 00:13:27.530 07:59:32 -- common/autotest_common.sh@1210 -- # return 0 00:13:27.530 07:59:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:13:27.530 07:59:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.530 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:27.530 07:59:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.530 07:59:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:27.530 07:59:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:13:27.530 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:13:27.530 07:59:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:13:27.530 07:59:32 -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:27.530 07:59:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK9 00:13:27.530 07:59:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK9 00:13:27.531 07:59:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:27.531 07:59:32 -- common/autotest_common.sh@1210 -- # return 0 00:13:27.531 07:59:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:13:27.531 07:59:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.531 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:27.531 07:59:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.531 07:59:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:27.531 07:59:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:13:27.531 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:13:27.531 07:59:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:13:27.531 07:59:32 -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.531 07:59:32 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:27.531 07:59:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK10 00:13:27.531 07:59:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:27.531 07:59:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK10 00:13:27.531 07:59:32 -- common/autotest_common.sh@1210 -- # return 0 00:13:27.531 07:59:32 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:13:27.531 07:59:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.531 07:59:32 -- common/autotest_common.sh@10 -- # set +x 00:13:27.531 07:59:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.531 07:59:32 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:13:27.531 07:59:32 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:13:27.531 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:13:27.531 07:59:32 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:13:27.531 07:59:32 -- common/autotest_common.sh@1198 -- # local i=0 00:13:27.531 07:59:32 -- 
common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:13:27.531 07:59:32 -- common/autotest_common.sh@1199 -- # grep -q -w SPDK11 00:13:27.531 07:59:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDK11 00:13:27.531 07:59:32 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:13:27.531 07:59:33 -- common/autotest_common.sh@1210 -- # return 0 00:13:27.531 07:59:33 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:13:27.531 07:59:33 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:27.531 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:13:27.531 07:59:33 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:27.531 07:59:33 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:13:27.531 07:59:33 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:13:27.531 07:59:33 -- target/multiconnection.sh@47 -- # nvmftestfini 00:13:27.531 07:59:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:13:27.531 07:59:33 -- nvmf/common.sh@116 -- # sync 00:13:27.531 07:59:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:13:27.531 07:59:33 -- nvmf/common.sh@119 -- # set +e 00:13:27.531 07:59:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:13:27.531 07:59:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:13:27.531 rmmod nvme_tcp 00:13:27.531 rmmod nvme_fabrics 00:13:27.531 rmmod nvme_keyring 00:13:27.531 07:59:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:13:27.531 07:59:33 -- nvmf/common.sh@123 -- # set -e 00:13:27.531 07:59:33 -- nvmf/common.sh@124 -- # return 0 00:13:27.531 07:59:33 -- nvmf/common.sh@477 -- # '[' -n 75470 ']' 00:13:27.531 07:59:33 -- nvmf/common.sh@478 -- # killprocess 75470 00:13:27.531 07:59:33 -- common/autotest_common.sh@926 -- # '[' -z 75470 ']' 00:13:27.531 07:59:33 -- common/autotest_common.sh@930 -- # kill -0 75470 00:13:27.531 07:59:33 -- common/autotest_common.sh@931 -- # uname 00:13:27.531 07:59:33 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:13:27.531 07:59:33 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 75470 00:13:27.531 07:59:33 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:13:27.531 07:59:33 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:13:27.531 07:59:33 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 75470' 00:13:27.531 killing process with pid 75470 00:13:27.531 07:59:33 -- common/autotest_common.sh@945 -- # kill 75470 00:13:27.531 07:59:33 -- common/autotest_common.sh@950 -- # wait 75470 00:13:27.790 07:59:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:13:27.790 07:59:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:13:27.790 07:59:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:13:27.790 07:59:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:27.790 07:59:33 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:13:27.790 07:59:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.790 07:59:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:27.790 07:59:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.790 07:59:33 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:13:27.790 ************************************ 00:13:27.790 END TEST nvmf_multiconnection 00:13:27.790 ************************************ 00:13:27.790 00:13:27.790 real 0m48.613s 00:13:27.790 user 2m37.549s 00:13:27.790 sys 0m36.270s 00:13:27.790 07:59:33 -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:13:27.790 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:13:27.790 07:59:33 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:27.790 07:59:33 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:13:27.790 07:59:33 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:13:27.790 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:13:27.790 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:13:27.790 ************************************ 00:13:27.790 START TEST nvmf_initiator_timeout 00:13:27.790 ************************************ 00:13:27.790 07:59:33 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:13:27.790 * Looking for test storage... 00:13:27.790 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:27.790 07:59:33 -- target/initiator_timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:27.790 07:59:33 -- nvmf/common.sh@7 -- # uname -s 00:13:27.790 07:59:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.790 07:59:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.790 07:59:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.790 07:59:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.790 07:59:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.790 07:59:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.790 07:59:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.790 07:59:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.790 07:59:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.790 07:59:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.790 07:59:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:13:27.790 07:59:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:13:27.790 07:59:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.790 07:59:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.790 07:59:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:27.790 07:59:33 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:27.790 07:59:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.790 07:59:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.790 07:59:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.790 07:59:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.790 07:59:33 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.790 07:59:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.790 07:59:33 -- paths/export.sh@5 -- # export PATH 00:13:27.790 07:59:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.790 07:59:33 -- nvmf/common.sh@46 -- # : 0 00:13:27.790 07:59:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:13:27.790 07:59:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:13:27.790 07:59:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:13:27.790 07:59:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:27.790 07:59:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.790 07:59:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:13:27.790 07:59:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:13:27.790 07:59:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:13:28.049 07:59:33 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:28.049 07:59:33 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:28.049 07:59:33 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:13:28.049 07:59:33 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:13:28.049 07:59:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.049 07:59:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:13:28.049 07:59:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:13:28.049 07:59:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:13:28.049 07:59:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.049 07:59:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:28.049 07:59:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.049 07:59:33 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:13:28.049 07:59:33 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:13:28.049 07:59:33 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:13:28.049 07:59:33 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:13:28.049 07:59:33 -- nvmf/common.sh@419 -- # [[ tcp == 
tcp ]] 00:13:28.049 07:59:33 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:13:28.049 07:59:33 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:28.049 07:59:33 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:28.049 07:59:33 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:13:28.049 07:59:33 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:13:28.050 07:59:33 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:28.050 07:59:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:28.050 07:59:33 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:28.050 07:59:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:28.050 07:59:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:28.050 07:59:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:28.050 07:59:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:28.050 07:59:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:28.050 07:59:33 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:13:28.050 07:59:33 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:13:28.050 Cannot find device "nvmf_tgt_br" 00:13:28.050 07:59:33 -- nvmf/common.sh@154 -- # true 00:13:28.050 07:59:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:13:28.050 Cannot find device "nvmf_tgt_br2" 00:13:28.050 07:59:33 -- nvmf/common.sh@155 -- # true 00:13:28.050 07:59:33 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:13:28.050 07:59:33 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:13:28.050 Cannot find device "nvmf_tgt_br" 00:13:28.050 07:59:33 -- nvmf/common.sh@157 -- # true 00:13:28.050 07:59:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:13:28.050 Cannot find device "nvmf_tgt_br2" 00:13:28.050 07:59:33 -- nvmf/common.sh@158 -- # true 00:13:28.050 07:59:33 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:13:28.050 07:59:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:13:28.050 07:59:33 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:28.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:28.050 07:59:33 -- nvmf/common.sh@161 -- # true 00:13:28.050 07:59:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:28.050 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:28.050 07:59:33 -- nvmf/common.sh@162 -- # true 00:13:28.050 07:59:33 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:13:28.050 07:59:33 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:28.050 07:59:33 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:28.050 07:59:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:28.050 07:59:33 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:28.050 07:59:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:28.050 07:59:33 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:28.050 07:59:33 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:13:28.050 07:59:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 
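At this point the harness has created the target namespace, the three veth pairs, and the 10.0.0.x addresses; the commands that follow bring the links up, bridge the host-side ends, open TCP port 4420, and ping-check the result. A standalone sketch of the same topology, using the namespace and interface names from the trace (error handling and the pre-cleanup of stale devices are omitted):

#!/usr/bin/env bash
# Rebuild the test topology by hand: one namespace for the target, veth
# pairs for initiator and target, host-side ends bridged together.
set -e

ip netns add nvmf_tgt_ns_spdk

# veth pairs: the *_if ends carry traffic, the *_br ends join the bridge.
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2

# Move the target-side ends into the namespace.
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

# Addressing: initiator on .1, target listeners on .2 and .3.
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up, including loopback inside the namespace.
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# Bridge the host-side veth ends so initiator and target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP (port 4420) in and let bridged traffic through.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
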
00:13:28.050 07:59:33 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:13:28.050 07:59:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:13:28.050 07:59:33 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:13:28.050 07:59:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:13:28.050 07:59:33 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:28.050 07:59:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:28.050 07:59:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:28.050 07:59:33 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:13:28.050 07:59:33 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:13:28.050 07:59:33 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:13:28.309 07:59:33 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:28.309 07:59:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:28.309 07:59:33 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:28.309 07:59:33 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:28.309 07:59:33 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:13:28.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:28.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:13:28.309 00:13:28.309 --- 10.0.0.2 ping statistics --- 00:13:28.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.309 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:13:28.309 07:59:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:13:28.309 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:28.309 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:13:28.309 00:13:28.309 --- 10.0.0.3 ping statistics --- 00:13:28.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.309 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:13:28.309 07:59:33 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:28.309 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:28.309 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:13:28.309 00:13:28.309 --- 10.0.0.1 ping statistics --- 00:13:28.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:28.309 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:13:28.309 07:59:33 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:28.309 07:59:33 -- nvmf/common.sh@421 -- # return 0 00:13:28.309 07:59:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:13:28.309 07:59:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:28.309 07:59:33 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:13:28.309 07:59:33 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:13:28.309 07:59:33 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:28.309 07:59:33 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:13:28.309 07:59:33 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:13:28.309 07:59:33 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:13:28.309 07:59:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:13:28.309 07:59:33 -- common/autotest_common.sh@712 -- # xtrace_disable 00:13:28.309 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:13:28.309 07:59:33 -- nvmf/common.sh@469 -- # nvmfpid=76243 00:13:28.309 07:59:33 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:28.309 07:59:33 -- nvmf/common.sh@470 -- # waitforlisten 76243 00:13:28.309 07:59:33 -- common/autotest_common.sh@819 -- # '[' -z 76243 ']' 00:13:28.309 07:59:33 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:28.309 07:59:33 -- common/autotest_common.sh@824 -- # local max_retries=100 00:13:28.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:28.309 07:59:33 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:28.309 07:59:33 -- common/autotest_common.sh@828 -- # xtrace_disable 00:13:28.309 07:59:33 -- common/autotest_common.sh@10 -- # set +x 00:13:28.309 [2024-07-13 07:59:34.003572] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:13:28.309 [2024-07-13 07:59:34.003664] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:28.568 [2024-07-13 07:59:34.146665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:28.568 [2024-07-13 07:59:34.187479] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:13:28.568 [2024-07-13 07:59:34.187921] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:28.568 [2024-07-13 07:59:34.188073] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:28.568 [2024-07-13 07:59:34.188219] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
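nvmfappstart above launched build/bin/nvmf_tgt inside the target namespace with -m 0xF (cores 0-3) and -e 0xFFFF (all tracepoint groups), and waitforlisten 76243 blocks until the app answers on /var/tmp/spdk.sock; the EAL and reactor messages that follow are that startup. A rough sketch of the launch-and-wait pattern, with the RPC poll written out explicitly (the real waitforlisten helper in autotest_common.sh is more involved):

#!/usr/bin/env bash
# Sketch only: start the NVMe-oF target in the test namespace and wait for
# its JSON-RPC socket. Binary and socket paths are the ones printed in the
# trace; the polling loop is a simplified stand-in for waitforlisten.
set -e

NVMF_TGT=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
RPC_SOCK=/var/tmp/spdk.sock

# -i 0: shared-memory id, -e 0xFFFF: tracepoint group mask, -m 0xF: cores 0-3.
ip netns exec nvmf_tgt_ns_spdk "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

echo "Waiting for process to start up and listen on UNIX domain socket $RPC_SOCK..."
until "$RPC" -s "$RPC_SOCK" -t 1 rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$nvmfpid"   # abort (via set -e) if the target died during startup
    sleep 0.5
done
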
00:13:28.568 [2024-07-13 07:59:34.188545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:28.568 [2024-07-13 07:59:34.188644] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:28.568 [2024-07-13 07:59:34.188764] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:28.568 [2024-07-13 07:59:34.188767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.135 07:59:34 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:13:29.135 07:59:34 -- common/autotest_common.sh@852 -- # return 0 00:13:29.135 07:59:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:13:29.135 07:59:34 -- common/autotest_common.sh@718 -- # xtrace_disable 00:13:29.135 07:59:34 -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 07:59:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.394 07:59:34 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:29.394 07:59:34 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:29.394 07:59:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.394 07:59:34 -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 Malloc0 00:13:29.394 07:59:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.394 07:59:34 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:13:29.394 07:59:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.394 07:59:34 -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 Delay0 00:13:29.394 07:59:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.394 07:59:34 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:29.394 07:59:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.394 07:59:34 -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 [2024-07-13 07:59:35.005864] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.394 07:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.394 07:59:35 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:29.394 07:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.394 07:59:35 -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 07:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.394 07:59:35 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.394 07:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.394 07:59:35 -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 07:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.394 07:59:35 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:29.394 07:59:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:29.394 07:59:35 -- common/autotest_common.sh@10 -- # set +x 00:13:29.394 [2024-07-13 07:59:35.034012] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:29.394 07:59:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:29.394 07:59:35 -- target/initiator_timeout.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:29.394 07:59:35 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:13:29.394 07:59:35 -- common/autotest_common.sh@1177 -- # local i=0 00:13:29.394 07:59:35 -- common/autotest_common.sh@1178 -- # local nvme_device_counter=1 nvme_devices=0 00:13:29.394 07:59:35 -- common/autotest_common.sh@1179 -- # [[ -n '' ]] 00:13:29.395 07:59:35 -- common/autotest_common.sh@1184 -- # sleep 2 00:13:31.925 07:59:37 -- common/autotest_common.sh@1185 -- # (( i++ <= 15 )) 00:13:31.925 07:59:37 -- common/autotest_common.sh@1186 -- # lsblk -l -o NAME,SERIAL 00:13:31.925 07:59:37 -- common/autotest_common.sh@1186 -- # grep -c SPDKISFASTANDAWESOME 00:13:31.925 07:59:37 -- common/autotest_common.sh@1186 -- # nvme_devices=1 00:13:31.925 07:59:37 -- common/autotest_common.sh@1187 -- # (( nvme_devices == nvme_device_counter )) 00:13:31.925 07:59:37 -- common/autotest_common.sh@1187 -- # return 0 00:13:31.925 07:59:37 -- target/initiator_timeout.sh@35 -- # fio_pid=76288 00:13:31.925 07:59:37 -- target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:13:31.925 07:59:37 -- target/initiator_timeout.sh@37 -- # sleep 3 00:13:31.925 [global] 00:13:31.925 thread=1 00:13:31.925 invalidate=1 00:13:31.925 rw=write 00:13:31.925 time_based=1 00:13:31.925 runtime=60 00:13:31.925 ioengine=libaio 00:13:31.925 direct=1 00:13:31.925 bs=4096 00:13:31.925 iodepth=1 00:13:31.925 norandommap=0 00:13:31.925 numjobs=1 00:13:31.925 00:13:31.925 verify_dump=1 00:13:31.925 verify_backlog=512 00:13:31.925 verify_state_save=0 00:13:31.925 do_verify=1 00:13:31.925 verify=crc32c-intel 00:13:31.925 [job0] 00:13:31.925 filename=/dev/nvme0n1 00:13:31.925 Could not set queue depth (nvme0n1) 00:13:31.925 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:13:31.925 fio-3.35 00:13:31.925 Starting 1 thread 00:13:34.451 07:59:40 -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:13:34.451 07:59:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.451 07:59:40 -- common/autotest_common.sh@10 -- # set +x 00:13:34.451 true 00:13:34.451 07:59:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.451 07:59:40 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:13:34.451 07:59:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.451 07:59:40 -- common/autotest_common.sh@10 -- # set +x 00:13:34.451 true 00:13:34.451 07:59:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.451 07:59:40 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:13:34.451 07:59:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.451 07:59:40 -- common/autotest_common.sh@10 -- # set +x 00:13:34.451 true 00:13:34.451 07:59:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.451 07:59:40 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:13:34.451 07:59:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:34.451 07:59:40 -- common/autotest_common.sh@10 -- # set +x 00:13:34.451 true 00:13:34.451 07:59:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:34.451 07:59:40 -- 
target/initiator_timeout.sh@45 -- # sleep 3 00:13:37.780 07:59:43 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:13:37.780 07:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.780 07:59:43 -- common/autotest_common.sh@10 -- # set +x 00:13:37.780 true 00:13:37.780 07:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.780 07:59:43 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:13:37.780 07:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.780 07:59:43 -- common/autotest_common.sh@10 -- # set +x 00:13:37.780 true 00:13:37.780 07:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.780 07:59:43 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:13:37.780 07:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.780 07:59:43 -- common/autotest_common.sh@10 -- # set +x 00:13:37.780 true 00:13:37.780 07:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.780 07:59:43 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:13:37.780 07:59:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:13:37.780 07:59:43 -- common/autotest_common.sh@10 -- # set +x 00:13:37.780 true 00:13:37.780 07:59:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:13:37.780 07:59:43 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:13:37.780 07:59:43 -- target/initiator_timeout.sh@54 -- # wait 76288 00:14:33.993 00:14:33.993 job0: (groupid=0, jobs=1): err= 0: pid=76310: Sat Jul 13 08:00:37 2024 00:14:33.993 read: IOPS=785, BW=3140KiB/s (3216kB/s)(184MiB/60000msec) 00:14:33.993 slat (usec): min=10, max=527, avg=14.28, stdev= 4.79 00:14:33.993 clat (usec): min=155, max=1783, avg=206.95, stdev=24.01 00:14:33.993 lat (usec): min=168, max=1806, avg=221.23, stdev=24.87 00:14:33.993 clat percentiles (usec): 00:14:33.993 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 188], 00:14:33.993 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 210], 00:14:33.993 | 70.00th=[ 217], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 247], 00:14:33.993 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 310], 99.95th=[ 334], 00:14:33.993 | 99.99th=[ 668] 00:14:33.993 write: IOPS=789, BW=3160KiB/s (3235kB/s)(185MiB/60000msec); 0 zone resets 00:14:33.993 slat (usec): min=12, max=10087, avg=21.29, stdev=61.79 00:14:33.993 clat (usec): min=113, max=40798k, avg=1021.38, stdev=187400.18 00:14:33.993 lat (usec): min=134, max=40798k, avg=1042.67, stdev=187400.18 00:14:33.993 clat percentiles (usec): 00:14:33.993 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 137], 20.00th=[ 143], 00:14:33.993 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 159], 60.00th=[ 163], 00:14:33.993 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 198], 00:14:33.993 | 99.00th=[ 217], 99.50th=[ 225], 99.90th=[ 253], 99.95th=[ 269], 00:14:33.993 | 99.99th=[ 693] 00:14:33.993 bw ( KiB/s): min= 5584, max=12288, per=100.00%, avg=9701.05, stdev=1615.74, samples=38 00:14:33.993 iops : min= 1396, max= 3072, avg=2425.26, stdev=403.93, samples=38 00:14:33.993 lat (usec) : 250=98.10%, 500=1.88%, 750=0.01%, 1000=0.01% 00:14:33.993 lat (msec) : 2=0.01%, 4=0.01%, >=2000=0.01% 00:14:33.993 cpu : usr=0.64%, sys=2.10%, ctx=94530, majf=0, minf=2 00:14:33.993 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:14:33.993 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:14:33.993 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.993 issued rwts: total=47104,47395,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:33.993 latency : target=0, window=0, percentile=100.00%, depth=1 00:14:33.993 00:14:33.993 Run status group 0 (all jobs): 00:14:33.993 READ: bw=3140KiB/s (3216kB/s), 3140KiB/s-3140KiB/s (3216kB/s-3216kB/s), io=184MiB (193MB), run=60000-60000msec 00:14:33.993 WRITE: bw=3160KiB/s (3235kB/s), 3160KiB/s-3160KiB/s (3235kB/s-3235kB/s), io=185MiB (194MB), run=60000-60000msec 00:14:33.993 00:14:33.993 Disk stats (read/write): 00:14:33.993 nvme0n1: ios=47105/47104, merge=0/0, ticks=10057/8074, in_queue=18131, util=99.62% 00:14:33.993 08:00:37 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:33.993 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.993 08:00:37 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:33.993 08:00:37 -- common/autotest_common.sh@1198 -- # local i=0 00:14:33.993 08:00:37 -- common/autotest_common.sh@1199 -- # lsblk -o NAME,SERIAL 00:14:33.993 08:00:37 -- common/autotest_common.sh@1199 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.993 08:00:37 -- common/autotest_common.sh@1206 -- # lsblk -l -o NAME,SERIAL 00:14:33.993 08:00:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:33.993 nvmf hotplug test: fio successful as expected 00:14:33.993 08:00:37 -- common/autotest_common.sh@1210 -- # return 0 00:14:33.993 08:00:37 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:14:33.993 08:00:37 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:14:33.993 08:00:37 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:33.993 08:00:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.993 08:00:37 -- common/autotest_common.sh@10 -- # set +x 00:14:33.993 08:00:37 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.993 08:00:37 -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:14:33.993 08:00:37 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:14:33.993 08:00:37 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:14:33.993 08:00:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:33.993 08:00:37 -- nvmf/common.sh@116 -- # sync 00:14:33.993 08:00:37 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:33.993 08:00:37 -- nvmf/common.sh@119 -- # set +e 00:14:33.993 08:00:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:33.993 08:00:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:33.993 rmmod nvme_tcp 00:14:33.993 rmmod nvme_fabrics 00:14:33.993 rmmod nvme_keyring 00:14:33.993 08:00:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:33.993 08:00:37 -- nvmf/common.sh@123 -- # set -e 00:14:33.993 08:00:37 -- nvmf/common.sh@124 -- # return 0 00:14:33.993 08:00:37 -- nvmf/common.sh@477 -- # '[' -n 76243 ']' 00:14:33.993 08:00:37 -- nvmf/common.sh@478 -- # killprocess 76243 00:14:33.993 08:00:37 -- common/autotest_common.sh@926 -- # '[' -z 76243 ']' 00:14:33.993 08:00:37 -- common/autotest_common.sh@930 -- # kill -0 76243 00:14:33.993 08:00:37 -- common/autotest_common.sh@931 -- # uname 00:14:33.993 08:00:37 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:33.993 08:00:37 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76243 00:14:33.993 killing process with pid 76243 
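The initiator-timeout scenario above hinges on the Delay0 bdev created earlier on top of Malloc0: while the 60-second fio verify job is running, its latencies are pushed from 30 µs up to the tens of seconds and then restored, and the test passes because fio still finishes cleanly ("nvmf hotplug test: fio successful as expected"). A condensed sketch of that RPC sequence with the values from the trace (assumed to be microseconds, and again using scripts/rpc.py in place of the harness's rpc_cmd wrapper):

#!/usr/bin/env bash
# Sketch of the delay-bdev manipulation driving the test above; values and
# bdev names are copied from the trace, the rpc.py path is an assumption.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

# Delay bdev stacked on Malloc0 with 30 us average/p99 read+write latency.
"$RPC" bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

# Mid-run: raise latencies far beyond normal completion times so outstanding
# requests sit in the delay queue while fio keeps submitting.
"$RPC" bdev_delay_update_latency Delay0 avg_read  31000000
"$RPC" bdev_delay_update_latency Delay0 avg_write 31000000
"$RPC" bdev_delay_update_latency Delay0 p99_read  31000000
"$RPC" bdev_delay_update_latency Delay0 p99_write 310000000

sleep 3

# Restore the original 30 us latencies and let the fio job run to completion.
for metric in avg_read avg_write p99_read p99_write; do
    "$RPC" bdev_delay_update_latency Delay0 "$metric" 30
done
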
00:14:33.993 08:00:37 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:33.993 08:00:37 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:33.993 08:00:37 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76243' 00:14:33.993 08:00:37 -- common/autotest_common.sh@945 -- # kill 76243 00:14:33.993 08:00:37 -- common/autotest_common.sh@950 -- # wait 76243 00:14:33.993 08:00:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:33.993 08:00:37 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:33.993 08:00:37 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:33.993 08:00:37 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.993 08:00:37 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:33.993 08:00:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.993 08:00:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.993 08:00:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.993 08:00:37 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:33.993 ************************************ 00:14:33.993 END TEST nvmf_initiator_timeout 00:14:33.993 ************************************ 00:14:33.993 00:14:33.993 real 1m4.309s 00:14:33.993 user 3m52.890s 00:14:33.993 sys 0m21.600s 00:14:33.993 08:00:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.993 08:00:37 -- common/autotest_common.sh@10 -- # set +x 00:14:33.993 08:00:37 -- nvmf/nvmf.sh@69 -- # [[ virt == phy ]] 00:14:33.993 08:00:37 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:14:33.993 08:00:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:33.993 08:00:37 -- common/autotest_common.sh@10 -- # set +x 00:14:33.993 08:00:37 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:14:33.993 08:00:37 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:33.993 08:00:37 -- common/autotest_common.sh@10 -- # set +x 00:14:33.993 08:00:37 -- nvmf/nvmf.sh@90 -- # [[ 1 -eq 0 ]] 00:14:33.993 08:00:37 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:33.993 08:00:37 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:33.993 08:00:37 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:33.993 08:00:37 -- common/autotest_common.sh@10 -- # set +x 00:14:33.993 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:14:33.993 ************************************ 00:14:33.993 START TEST nvmf_identify 00:14:33.993 ************************************ 00:14:33.993 08:00:37 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:14:33.993 * Looking for test storage... 
00:14:33.993 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:33.993 08:00:37 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:33.993 08:00:37 -- nvmf/common.sh@7 -- # uname -s 00:14:33.993 08:00:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.993 08:00:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.993 08:00:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.993 08:00:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.994 08:00:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.994 08:00:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.994 08:00:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.994 08:00:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.994 08:00:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.994 08:00:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.994 08:00:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:14:33.994 08:00:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:14:33.994 08:00:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.994 08:00:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.994 08:00:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:33.994 08:00:38 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:33.994 08:00:38 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.994 08:00:38 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.994 08:00:38 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.994 08:00:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.994 08:00:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.994 08:00:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.994 08:00:38 -- paths/export.sh@5 
-- # export PATH 00:14:33.994 08:00:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.994 08:00:38 -- nvmf/common.sh@46 -- # : 0 00:14:33.994 08:00:38 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:33.994 08:00:38 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:33.994 08:00:38 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:33.994 08:00:38 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.994 08:00:38 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.994 08:00:38 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:33.994 08:00:38 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:33.994 08:00:38 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:33.994 08:00:38 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:33.994 08:00:38 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:33.994 08:00:38 -- host/identify.sh@14 -- # nvmftestinit 00:14:33.994 08:00:38 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:33.994 08:00:38 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.994 08:00:38 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:33.994 08:00:38 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:33.994 08:00:38 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:33.994 08:00:38 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.994 08:00:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.994 08:00:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.994 08:00:38 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:33.994 08:00:38 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:33.994 08:00:38 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:33.994 08:00:38 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:14:33.994 08:00:38 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:33.994 08:00:38 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:33.994 08:00:38 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:33.994 08:00:38 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:33.994 08:00:38 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:33.994 08:00:38 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:33.994 08:00:38 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:33.994 08:00:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:33.994 08:00:38 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:33.994 08:00:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:33.994 08:00:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:33.994 08:00:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:33.994 08:00:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:33.994 08:00:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:33.994 08:00:38 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:33.994 08:00:38 -- 
nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:33.994 Cannot find device "nvmf_tgt_br" 00:14:33.994 08:00:38 -- nvmf/common.sh@154 -- # true 00:14:33.994 08:00:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:33.994 Cannot find device "nvmf_tgt_br2" 00:14:33.994 08:00:38 -- nvmf/common.sh@155 -- # true 00:14:33.994 08:00:38 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:33.994 08:00:38 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:33.994 Cannot find device "nvmf_tgt_br" 00:14:33.994 08:00:38 -- nvmf/common.sh@157 -- # true 00:14:33.994 08:00:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:33.994 Cannot find device "nvmf_tgt_br2" 00:14:33.994 08:00:38 -- nvmf/common.sh@158 -- # true 00:14:33.994 08:00:38 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:33.994 08:00:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:33.994 08:00:38 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:33.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.994 08:00:38 -- nvmf/common.sh@161 -- # true 00:14:33.994 08:00:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:33.994 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:33.994 08:00:38 -- nvmf/common.sh@162 -- # true 00:14:33.994 08:00:38 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:33.994 08:00:38 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:33.994 08:00:38 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:33.994 08:00:38 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:33.994 08:00:38 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:33.994 08:00:38 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:33.994 08:00:38 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:33.994 08:00:38 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:33.994 08:00:38 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:33.994 08:00:38 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:33.994 08:00:38 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:33.994 08:00:38 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:33.994 08:00:38 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:33.994 08:00:38 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:33.994 08:00:38 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:33.994 08:00:38 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:33.994 08:00:38 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:33.994 08:00:38 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:33.994 08:00:38 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:33.994 08:00:38 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:33.994 08:00:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:33.994 08:00:38 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:33.994 08:00:38 -- 
nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:33.994 08:00:38 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:33.994 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:33.994 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:14:33.994 00:14:33.994 --- 10.0.0.2 ping statistics --- 00:14:33.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.994 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:33.994 08:00:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:33.994 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:33.994 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:14:33.994 00:14:33.994 --- 10.0.0.3 ping statistics --- 00:14:33.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.994 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:14:33.994 08:00:38 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:33.994 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:33.994 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:14:33.994 00:14:33.994 --- 10.0.0.1 ping statistics --- 00:14:33.994 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:33.994 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:14:33.994 08:00:38 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:33.994 08:00:38 -- nvmf/common.sh@421 -- # return 0 00:14:33.994 08:00:38 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:33.995 08:00:38 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:33.995 08:00:38 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:33.995 08:00:38 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:33.995 08:00:38 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:33.995 08:00:38 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:33.995 08:00:38 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:33.995 08:00:38 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:14:33.995 08:00:38 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:33.995 08:00:38 -- common/autotest_common.sh@10 -- # set +x 00:14:33.995 08:00:38 -- host/identify.sh@19 -- # nvmfpid=76785 00:14:33.995 08:00:38 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:33.995 08:00:38 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:33.995 08:00:38 -- host/identify.sh@23 -- # waitforlisten 76785 00:14:33.995 08:00:38 -- common/autotest_common.sh@819 -- # '[' -z 76785 ']' 00:14:33.995 08:00:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:33.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:33.995 08:00:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:33.995 08:00:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:33.995 08:00:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:33.995 08:00:38 -- common/autotest_common.sh@10 -- # set +x 00:14:33.995 [2024-07-13 08:00:38.414285] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
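For readability: the nvmf_veth_init entries traced above boil down to roughly the topology below. This is a condensed sketch reconstructed only from the commands visible in the trace (interface names, addresses, and the iptables rules are taken verbatim from it); the real helper in test/nvmf/common.sh also tears down stale devices first and carries the xtrace/error handling omitted here.

# Sketch of the veth/namespace topology nvmf_veth_init builds (from the trace above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                                   # bridge ties host side to the namespace veths
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT         # allow NVMe/TCP traffic in
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                                  # host -> target connectivity check
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                         # target -> host connectivity check

The three ping statistics blocks that follow in the log are exactly these connectivity checks succeeding before nvmf_tgt is started inside the namespace.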
00:14:33.995 [2024-07-13 08:00:38.414370] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:33.995 [2024-07-13 08:00:38.554117] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:33.995 [2024-07-13 08:00:38.587595] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:33.995 [2024-07-13 08:00:38.588034] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:33.995 [2024-07-13 08:00:38.588159] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:33.995 [2024-07-13 08:00:38.588337] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:33.995 [2024-07-13 08:00:38.588587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:33.995 [2024-07-13 08:00:38.588855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:33.995 [2024-07-13 08:00:38.588856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.995 [2024-07-13 08:00:38.588807] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:33.995 08:00:39 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:33.995 08:00:39 -- common/autotest_common.sh@852 -- # return 0 00:14:33.995 08:00:39 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:33.995 08:00:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.995 08:00:39 -- common/autotest_common.sh@10 -- # set +x 00:14:33.995 [2024-07-13 08:00:39.367622] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:33.995 08:00:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.995 08:00:39 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:14:33.995 08:00:39 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:33.995 08:00:39 -- common/autotest_common.sh@10 -- # set +x 00:14:33.995 08:00:39 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:33.995 08:00:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.995 08:00:39 -- common/autotest_common.sh@10 -- # set +x 00:14:33.995 Malloc0 00:14:33.995 08:00:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.995 08:00:39 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:33.995 08:00:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.995 08:00:39 -- common/autotest_common.sh@10 -- # set +x 00:14:33.995 08:00:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.995 08:00:39 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:14:33.995 08:00:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.995 08:00:39 -- common/autotest_common.sh@10 -- # set +x 00:14:33.995 08:00:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.995 08:00:39 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:33.995 08:00:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.995 08:00:39 -- common/autotest_common.sh@10 -- # set +x 00:14:33.995 [2024-07-13 08:00:39.466192] tcp.c: 951:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:33.995 08:00:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.995 08:00:39 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:33.995 08:00:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.995 08:00:39 -- common/autotest_common.sh@10 -- # set +x 00:14:33.995 08:00:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.995 08:00:39 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:14:33.995 08:00:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:33.995 08:00:39 -- common/autotest_common.sh@10 -- # set +x 00:14:33.995 [2024-07-13 08:00:39.481956] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:14:33.995 [ 00:14:33.995 { 00:14:33.995 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:33.995 "subtype": "Discovery", 00:14:33.995 "listen_addresses": [ 00:14:33.995 { 00:14:33.995 "transport": "TCP", 00:14:33.995 "trtype": "TCP", 00:14:33.995 "adrfam": "IPv4", 00:14:33.995 "traddr": "10.0.0.2", 00:14:33.995 "trsvcid": "4420" 00:14:33.995 } 00:14:33.995 ], 00:14:33.995 "allow_any_host": true, 00:14:33.995 "hosts": [] 00:14:33.995 }, 00:14:33.995 { 00:14:33.995 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:33.995 "subtype": "NVMe", 00:14:33.995 "listen_addresses": [ 00:14:33.995 { 00:14:33.995 "transport": "TCP", 00:14:33.995 "trtype": "TCP", 00:14:33.995 "adrfam": "IPv4", 00:14:33.995 "traddr": "10.0.0.2", 00:14:33.995 "trsvcid": "4420" 00:14:33.995 } 00:14:33.995 ], 00:14:33.995 "allow_any_host": true, 00:14:33.995 "hosts": [], 00:14:33.995 "serial_number": "SPDK00000000000001", 00:14:33.995 "model_number": "SPDK bdev Controller", 00:14:33.995 "max_namespaces": 32, 00:14:33.995 "min_cntlid": 1, 00:14:33.995 "max_cntlid": 65519, 00:14:33.995 "namespaces": [ 00:14:33.995 { 00:14:33.995 "nsid": 1, 00:14:33.995 "bdev_name": "Malloc0", 00:14:33.995 "name": "Malloc0", 00:14:33.995 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:14:33.995 "eui64": "ABCDEF0123456789", 00:14:33.995 "uuid": "c0b20141-981d-4470-9e88-61b8230a43ba" 00:14:33.995 } 00:14:33.995 ] 00:14:33.995 } 00:14:33.995 ] 00:14:33.995 08:00:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:33.995 08:00:39 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:14:33.995 [2024-07-13 08:00:39.522634] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
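For readability: the rpc_cmd calls above configure the freshly started target before spdk_nvme_identify is run. Expressed as direct scripts/rpc.py invocations (rpc_cmd is the test harness's wrapper around that script; the relative ./scripts and ./build paths below are assumptions, the actual run uses the absolute spdk_repo paths shown in the log), the sequence is roughly the following sketch, with all values copied from the trace.

# Start the target inside the test namespace (host/identify.sh@18 in the trace), then configure it over RPC.
ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192                  # TCP transport, 8 KiB in-capsule data
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                     # 64 MiB malloc bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems                                      # returns the JSON listing dumped above
# Identify the discovery controller over the fabric, as host/identify.sh@39 does:
./build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The long nvme_tcp/nvme_ctrlr DEBUG trace that follows is that identify run connecting to the discovery subsystem on 10.0.0.2:4420, and it ends with the controller capabilities report and the two discovery log entries printed further down.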
00:14:33.995 [2024-07-13 08:00:39.522850] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76814 ] 00:14:33.995 [2024-07-13 08:00:39.659795] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:14:33.995 [2024-07-13 08:00:39.659882] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:33.995 [2024-07-13 08:00:39.659890] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:33.995 [2024-07-13 08:00:39.659901] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:33.995 [2024-07-13 08:00:39.659911] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:33.995 [2024-07-13 08:00:39.660025] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:14:33.995 [2024-07-13 08:00:39.660075] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x10076c0 0 00:14:33.995 [2024-07-13 08:00:39.672852] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:33.995 [2024-07-13 08:00:39.672876] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:33.995 [2024-07-13 08:00:39.672898] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:33.995 [2024-07-13 08:00:39.672902] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:33.995 [2024-07-13 08:00:39.672944] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.995 [2024-07-13 08:00:39.672952] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.995 [2024-07-13 08:00:39.672956] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10076c0) 00:14:33.995 [2024-07-13 08:00:39.672984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:33.995 [2024-07-13 08:00:39.673014] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103df60, cid 0, qid 0 00:14:33.995 [2024-07-13 08:00:39.680836] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.995 [2024-07-13 08:00:39.680867] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.995 [2024-07-13 08:00:39.680873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.995 [2024-07-13 08:00:39.680894] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103df60) on tqpair=0x10076c0 00:14:33.995 [2024-07-13 08:00:39.680909] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:33.995 [2024-07-13 08:00:39.680917] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:14:33.995 [2024-07-13 08:00:39.680923] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:14:33.996 [2024-07-13 08:00:39.680938] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.680943] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.996 [2024-07-13 
08:00:39.680947] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10076c0) 00:14:33.996 [2024-07-13 08:00:39.680957] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.996 [2024-07-13 08:00:39.680982] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103df60, cid 0, qid 0 00:14:33.996 [2024-07-13 08:00:39.681039] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.996 [2024-07-13 08:00:39.681046] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.996 [2024-07-13 08:00:39.681050] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681054] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103df60) on tqpair=0x10076c0 00:14:33.996 [2024-07-13 08:00:39.681061] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:14:33.996 [2024-07-13 08:00:39.681069] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:14:33.996 [2024-07-13 08:00:39.681076] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681080] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681084] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10076c0) 00:14:33.996 [2024-07-13 08:00:39.681092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.996 [2024-07-13 08:00:39.681109] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103df60, cid 0, qid 0 00:14:33.996 [2024-07-13 08:00:39.681187] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.996 [2024-07-13 08:00:39.681195] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.996 [2024-07-13 08:00:39.681199] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681203] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103df60) on tqpair=0x10076c0 00:14:33.996 [2024-07-13 08:00:39.681210] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:14:33.996 [2024-07-13 08:00:39.681219] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:14:33.996 [2024-07-13 08:00:39.681227] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681231] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681235] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10076c0) 00:14:33.996 [2024-07-13 08:00:39.681243] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.996 [2024-07-13 08:00:39.681261] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103df60, cid 0, qid 0 00:14:33.996 [2024-07-13 08:00:39.681307] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.996 [2024-07-13 08:00:39.681314] 
nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.996 [2024-07-13 08:00:39.681318] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681322] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103df60) on tqpair=0x10076c0 00:14:33.996 [2024-07-13 08:00:39.681330] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:33.996 [2024-07-13 08:00:39.681340] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681345] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681349] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10076c0) 00:14:33.996 [2024-07-13 08:00:39.681357] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.996 [2024-07-13 08:00:39.681374] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103df60, cid 0, qid 0 00:14:33.996 [2024-07-13 08:00:39.681416] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.996 [2024-07-13 08:00:39.681423] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.996 [2024-07-13 08:00:39.681427] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681431] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103df60) on tqpair=0x10076c0 00:14:33.996 [2024-07-13 08:00:39.681437] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:14:33.996 [2024-07-13 08:00:39.681443] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:14:33.996 [2024-07-13 08:00:39.681451] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:33.996 [2024-07-13 08:00:39.681557] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:14:33.996 [2024-07-13 08:00:39.681563] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:33.996 [2024-07-13 08:00:39.681573] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681577] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681581] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10076c0) 00:14:33.996 [2024-07-13 08:00:39.681589] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.996 [2024-07-13 08:00:39.681608] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103df60, cid 0, qid 0 00:14:33.996 [2024-07-13 08:00:39.681657] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.996 [2024-07-13 08:00:39.681664] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.996 [2024-07-13 08:00:39.681668] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:14:33.996 [2024-07-13 08:00:39.681672] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103df60) on tqpair=0x10076c0 00:14:33.996 [2024-07-13 08:00:39.681679] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:33.996 [2024-07-13 08:00:39.681690] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681695] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681699] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10076c0) 00:14:33.996 [2024-07-13 08:00:39.681706] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.996 [2024-07-13 08:00:39.681724] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103df60, cid 0, qid 0 00:14:33.996 [2024-07-13 08:00:39.681776] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.996 [2024-07-13 08:00:39.681783] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.996 [2024-07-13 08:00:39.681787] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681791] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103df60) on tqpair=0x10076c0 00:14:33.996 [2024-07-13 08:00:39.681797] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:33.996 [2024-07-13 08:00:39.681803] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:14:33.996 [2024-07-13 08:00:39.681811] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:14:33.996 [2024-07-13 08:00:39.681837] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:14:33.996 [2024-07-13 08:00:39.681850] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681854] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681858] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10076c0) 00:14:33.996 [2024-07-13 08:00:39.681867] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.996 [2024-07-13 08:00:39.681888] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103df60, cid 0, qid 0 00:14:33.996 [2024-07-13 08:00:39.681977] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:33.996 [2024-07-13 08:00:39.681985] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:33.996 [2024-07-13 08:00:39.681990] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.681994] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10076c0): datao=0, datal=4096, cccid=0 00:14:33.996 [2024-07-13 08:00:39.682000] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x103df60) on tqpair(0x10076c0): expected_datao=0, 
payload_size=4096 00:14:33.996 [2024-07-13 08:00:39.682009] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.682014] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.682023] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.996 [2024-07-13 08:00:39.682030] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.996 [2024-07-13 08:00:39.682034] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.996 [2024-07-13 08:00:39.682038] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103df60) on tqpair=0x10076c0 00:14:33.996 [2024-07-13 08:00:39.682047] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:14:33.996 [2024-07-13 08:00:39.682053] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:14:33.996 [2024-07-13 08:00:39.682067] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:14:33.996 [2024-07-13 08:00:39.682073] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:14:33.996 [2024-07-13 08:00:39.682078] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:14:33.996 [2024-07-13 08:00:39.682084] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:14:33.997 [2024-07-13 08:00:39.682098] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:14:33.997 [2024-07-13 08:00:39.682106] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682111] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682115] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10076c0) 00:14:33.997 [2024-07-13 08:00:39.682123] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:33.997 [2024-07-13 08:00:39.682144] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103df60, cid 0, qid 0 00:14:33.997 [2024-07-13 08:00:39.682206] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.997 [2024-07-13 08:00:39.682213] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.997 [2024-07-13 08:00:39.682217] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682222] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103df60) on tqpair=0x10076c0 00:14:33.997 [2024-07-13 08:00:39.682231] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682235] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682239] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x10076c0) 00:14:33.997 [2024-07-13 08:00:39.682246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.997 [2024-07-13 
08:00:39.682253] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682257] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682261] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x10076c0) 00:14:33.997 [2024-07-13 08:00:39.682267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.997 [2024-07-13 08:00:39.682274] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682278] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682282] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x10076c0) 00:14:33.997 [2024-07-13 08:00:39.682288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.997 [2024-07-13 08:00:39.682295] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682299] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682303] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10076c0) 00:14:33.997 [2024-07-13 08:00:39.682309] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.997 [2024-07-13 08:00:39.682314] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:14:33.997 [2024-07-13 08:00:39.682327] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:33.997 [2024-07-13 08:00:39.682335] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682339] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682343] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10076c0) 00:14:33.997 [2024-07-13 08:00:39.682350] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.997 [2024-07-13 08:00:39.682370] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103df60, cid 0, qid 0 00:14:33.997 [2024-07-13 08:00:39.682378] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e0c0, cid 1, qid 0 00:14:33.997 [2024-07-13 08:00:39.682383] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e220, cid 2, qid 0 00:14:33.997 [2024-07-13 08:00:39.682389] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e380, cid 3, qid 0 00:14:33.997 [2024-07-13 08:00:39.682394] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e4e0, cid 4, qid 0 00:14:33.997 [2024-07-13 08:00:39.682484] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.997 [2024-07-13 08:00:39.682492] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.997 [2024-07-13 08:00:39.682496] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682500] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: 
*DEBUG*: complete tcp_req(0x103e4e0) on tqpair=0x10076c0 00:14:33.997 [2024-07-13 08:00:39.682507] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:14:33.997 [2024-07-13 08:00:39.682513] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:14:33.997 [2024-07-13 08:00:39.682524] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682529] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682533] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10076c0) 00:14:33.997 [2024-07-13 08:00:39.682541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.997 [2024-07-13 08:00:39.682559] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e4e0, cid 4, qid 0 00:14:33.997 [2024-07-13 08:00:39.682631] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:33.997 [2024-07-13 08:00:39.682638] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:33.997 [2024-07-13 08:00:39.682642] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682646] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10076c0): datao=0, datal=4096, cccid=4 00:14:33.997 [2024-07-13 08:00:39.682651] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x103e4e0) on tqpair(0x10076c0): expected_datao=0, payload_size=4096 00:14:33.997 [2024-07-13 08:00:39.682659] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682663] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682672] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.997 [2024-07-13 08:00:39.682678] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.997 [2024-07-13 08:00:39.682682] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682686] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e4e0) on tqpair=0x10076c0 00:14:33.997 [2024-07-13 08:00:39.682699] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:14:33.997 [2024-07-13 08:00:39.682723] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682728] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682732] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10076c0) 00:14:33.997 [2024-07-13 08:00:39.682740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.997 [2024-07-13 08:00:39.682748] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682752] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682756] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x10076c0) 00:14:33.997 [2024-07-13 08:00:39.682762] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.997 [2024-07-13 08:00:39.682786] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e4e0, cid 4, qid 0 00:14:33.997 [2024-07-13 08:00:39.682806] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e640, cid 5, qid 0 00:14:33.997 [2024-07-13 08:00:39.682907] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:33.997 [2024-07-13 08:00:39.682915] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:33.997 [2024-07-13 08:00:39.682919] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682923] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10076c0): datao=0, datal=1024, cccid=4 00:14:33.997 [2024-07-13 08:00:39.682928] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x103e4e0) on tqpair(0x10076c0): expected_datao=0, payload_size=1024 00:14:33.997 [2024-07-13 08:00:39.682935] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682940] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682946] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.997 [2024-07-13 08:00:39.682952] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.997 [2024-07-13 08:00:39.682956] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682960] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e640) on tqpair=0x10076c0 00:14:33.997 [2024-07-13 08:00:39.682979] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.997 [2024-07-13 08:00:39.682987] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.997 [2024-07-13 08:00:39.682991] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.682995] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e4e0) on tqpair=0x10076c0 00:14:33.997 [2024-07-13 08:00:39.683012] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.683017] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.683021] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10076c0) 00:14:33.997 [2024-07-13 08:00:39.683029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.997 [2024-07-13 08:00:39.683053] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e4e0, cid 4, qid 0 00:14:33.997 [2024-07-13 08:00:39.683121] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:33.997 [2024-07-13 08:00:39.683128] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:33.997 [2024-07-13 08:00:39.683132] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:33.997 [2024-07-13 08:00:39.683136] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10076c0): datao=0, datal=3072, cccid=4 00:14:33.997 [2024-07-13 08:00:39.683141] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x103e4e0) on tqpair(0x10076c0): expected_datao=0, payload_size=3072 00:14:33.998 [2024-07-13 
08:00:39.683149] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:33.998 [2024-07-13 08:00:39.683153] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:33.998 [2024-07-13 08:00:39.683161] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.998 [2024-07-13 08:00:39.683167] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.998 [2024-07-13 08:00:39.683171] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.998 [2024-07-13 08:00:39.683175] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e4e0) on tqpair=0x10076c0 00:14:33.998 [2024-07-13 08:00:39.683185] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.998 [2024-07-13 08:00:39.683190] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.998 [2024-07-13 08:00:39.683194] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x10076c0) 00:14:33.998 [2024-07-13 08:00:39.683201] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.998 [2024-07-13 08:00:39.683224] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e4e0, cid 4, qid 0 00:14:33.998 [2024-07-13 08:00:39.683288] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:33.998 [2024-07-13 08:00:39.683295] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:33.998 [2024-07-13 08:00:39.683299] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:33.998 [2024-07-13 08:00:39.683303] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x10076c0): datao=0, datal=8, cccid=4 00:14:33.998 [2024-07-13 08:00:39.683308] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x103e4e0) on tqpair(0x10076c0): expected_datao=0, payload_size=8 00:14:33.998 [2024-07-13 08:00:39.683315] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:33.998 [2024-07-13 08:00:39.683319] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:33.998 [2024-07-13 08:00:39.683350] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.998 [2024-07-13 08:00:39.683357] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.998 [2024-07-13 08:00:39.683362] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.998 [2024-07-13 08:00:39.683366] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e4e0) on tqpair=0x10076c0 00:14:33.998 ===================================================== 00:14:33.998 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:14:33.998 ===================================================== 00:14:33.998 Controller Capabilities/Features 00:14:33.998 ================================ 00:14:33.998 Vendor ID: 0000 00:14:33.998 Subsystem Vendor ID: 0000 00:14:33.998 Serial Number: .................... 00:14:33.998 Model Number: ........................................ 
00:14:33.998 Firmware Version: 24.01.1 00:14:33.998 Recommended Arb Burst: 0 00:14:33.998 IEEE OUI Identifier: 00 00 00 00:14:33.998 Multi-path I/O 00:14:33.998 May have multiple subsystem ports: No 00:14:33.998 May have multiple controllers: No 00:14:33.998 Associated with SR-IOV VF: No 00:14:33.998 Max Data Transfer Size: 131072 00:14:33.998 Max Number of Namespaces: 0 00:14:33.998 Max Number of I/O Queues: 1024 00:14:33.998 NVMe Specification Version (VS): 1.3 00:14:33.998 NVMe Specification Version (Identify): 1.3 00:14:33.998 Maximum Queue Entries: 128 00:14:33.998 Contiguous Queues Required: Yes 00:14:33.998 Arbitration Mechanisms Supported 00:14:33.998 Weighted Round Robin: Not Supported 00:14:33.998 Vendor Specific: Not Supported 00:14:33.998 Reset Timeout: 15000 ms 00:14:33.998 Doorbell Stride: 4 bytes 00:14:33.998 NVM Subsystem Reset: Not Supported 00:14:33.998 Command Sets Supported 00:14:33.998 NVM Command Set: Supported 00:14:33.998 Boot Partition: Not Supported 00:14:33.998 Memory Page Size Minimum: 4096 bytes 00:14:33.998 Memory Page Size Maximum: 4096 bytes 00:14:33.998 Persistent Memory Region: Not Supported 00:14:33.998 Optional Asynchronous Events Supported 00:14:33.998 Namespace Attribute Notices: Not Supported 00:14:33.998 Firmware Activation Notices: Not Supported 00:14:33.998 ANA Change Notices: Not Supported 00:14:33.998 PLE Aggregate Log Change Notices: Not Supported 00:14:33.998 LBA Status Info Alert Notices: Not Supported 00:14:33.998 EGE Aggregate Log Change Notices: Not Supported 00:14:33.998 Normal NVM Subsystem Shutdown event: Not Supported 00:14:33.998 Zone Descriptor Change Notices: Not Supported 00:14:33.998 Discovery Log Change Notices: Supported 00:14:33.998 Controller Attributes 00:14:33.998 128-bit Host Identifier: Not Supported 00:14:33.998 Non-Operational Permissive Mode: Not Supported 00:14:33.998 NVM Sets: Not Supported 00:14:33.998 Read Recovery Levels: Not Supported 00:14:33.998 Endurance Groups: Not Supported 00:14:33.998 Predictable Latency Mode: Not Supported 00:14:33.998 Traffic Based Keep ALive: Not Supported 00:14:33.998 Namespace Granularity: Not Supported 00:14:33.998 SQ Associations: Not Supported 00:14:33.998 UUID List: Not Supported 00:14:33.998 Multi-Domain Subsystem: Not Supported 00:14:33.998 Fixed Capacity Management: Not Supported 00:14:33.998 Variable Capacity Management: Not Supported 00:14:33.998 Delete Endurance Group: Not Supported 00:14:33.998 Delete NVM Set: Not Supported 00:14:33.998 Extended LBA Formats Supported: Not Supported 00:14:33.998 Flexible Data Placement Supported: Not Supported 00:14:33.998 00:14:33.998 Controller Memory Buffer Support 00:14:33.998 ================================ 00:14:33.998 Supported: No 00:14:33.998 00:14:33.998 Persistent Memory Region Support 00:14:33.998 ================================ 00:14:33.998 Supported: No 00:14:33.998 00:14:33.998 Admin Command Set Attributes 00:14:33.998 ============================ 00:14:33.998 Security Send/Receive: Not Supported 00:14:33.998 Format NVM: Not Supported 00:14:33.998 Firmware Activate/Download: Not Supported 00:14:33.998 Namespace Management: Not Supported 00:14:33.998 Device Self-Test: Not Supported 00:14:33.998 Directives: Not Supported 00:14:33.998 NVMe-MI: Not Supported 00:14:33.998 Virtualization Management: Not Supported 00:14:33.998 Doorbell Buffer Config: Not Supported 00:14:33.998 Get LBA Status Capability: Not Supported 00:14:33.998 Command & Feature Lockdown Capability: Not Supported 00:14:33.998 Abort Command Limit: 1 00:14:33.998 
Async Event Request Limit: 4 00:14:33.998 Number of Firmware Slots: N/A 00:14:33.998 Firmware Slot 1 Read-Only: N/A 00:14:33.998 Firmware Activation Without Reset: N/A 00:14:33.998 Multiple Update Detection Support: N/A 00:14:33.998 Firmware Update Granularity: No Information Provided 00:14:33.998 Per-Namespace SMART Log: No 00:14:33.998 Asymmetric Namespace Access Log Page: Not Supported 00:14:33.998 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:14:33.998 Command Effects Log Page: Not Supported 00:14:33.998 Get Log Page Extended Data: Supported 00:14:33.998 Telemetry Log Pages: Not Supported 00:14:33.998 Persistent Event Log Pages: Not Supported 00:14:33.998 Supported Log Pages Log Page: May Support 00:14:33.999 Commands Supported & Effects Log Page: Not Supported 00:14:33.999 Feature Identifiers & Effects Log Page:May Support 00:14:33.999 NVMe-MI Commands & Effects Log Page: May Support 00:14:33.999 Data Area 4 for Telemetry Log: Not Supported 00:14:33.999 Error Log Page Entries Supported: 128 00:14:33.999 Keep Alive: Not Supported 00:14:33.999 00:14:33.999 NVM Command Set Attributes 00:14:33.999 ========================== 00:14:33.999 Submission Queue Entry Size 00:14:33.999 Max: 1 00:14:33.999 Min: 1 00:14:33.999 Completion Queue Entry Size 00:14:33.999 Max: 1 00:14:33.999 Min: 1 00:14:33.999 Number of Namespaces: 0 00:14:33.999 Compare Command: Not Supported 00:14:33.999 Write Uncorrectable Command: Not Supported 00:14:33.999 Dataset Management Command: Not Supported 00:14:33.999 Write Zeroes Command: Not Supported 00:14:33.999 Set Features Save Field: Not Supported 00:14:33.999 Reservations: Not Supported 00:14:33.999 Timestamp: Not Supported 00:14:33.999 Copy: Not Supported 00:14:33.999 Volatile Write Cache: Not Present 00:14:33.999 Atomic Write Unit (Normal): 1 00:14:33.999 Atomic Write Unit (PFail): 1 00:14:33.999 Atomic Compare & Write Unit: 1 00:14:33.999 Fused Compare & Write: Supported 00:14:33.999 Scatter-Gather List 00:14:33.999 SGL Command Set: Supported 00:14:33.999 SGL Keyed: Supported 00:14:33.999 SGL Bit Bucket Descriptor: Not Supported 00:14:33.999 SGL Metadata Pointer: Not Supported 00:14:33.999 Oversized SGL: Not Supported 00:14:33.999 SGL Metadata Address: Not Supported 00:14:33.999 SGL Offset: Supported 00:14:33.999 Transport SGL Data Block: Not Supported 00:14:33.999 Replay Protected Memory Block: Not Supported 00:14:33.999 00:14:33.999 Firmware Slot Information 00:14:33.999 ========================= 00:14:33.999 Active slot: 0 00:14:33.999 00:14:33.999 00:14:33.999 Error Log 00:14:33.999 ========= 00:14:33.999 00:14:33.999 Active Namespaces 00:14:33.999 ================= 00:14:33.999 Discovery Log Page 00:14:33.999 ================== 00:14:33.999 Generation Counter: 2 00:14:33.999 Number of Records: 2 00:14:33.999 Record Format: 0 00:14:33.999 00:14:33.999 Discovery Log Entry 0 00:14:33.999 ---------------------- 00:14:33.999 Transport Type: 3 (TCP) 00:14:33.999 Address Family: 1 (IPv4) 00:14:33.999 Subsystem Type: 3 (Current Discovery Subsystem) 00:14:33.999 Entry Flags: 00:14:33.999 Duplicate Returned Information: 1 00:14:33.999 Explicit Persistent Connection Support for Discovery: 1 00:14:33.999 Transport Requirements: 00:14:33.999 Secure Channel: Not Required 00:14:33.999 Port ID: 0 (0x0000) 00:14:33.999 Controller ID: 65535 (0xffff) 00:14:33.999 Admin Max SQ Size: 128 00:14:33.999 Transport Service Identifier: 4420 00:14:33.999 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:14:33.999 Transport Address: 10.0.0.2 00:14:33.999 
Discovery Log Entry 1 00:14:33.999 ---------------------- 00:14:33.999 Transport Type: 3 (TCP) 00:14:33.999 Address Family: 1 (IPv4) 00:14:33.999 Subsystem Type: 2 (NVM Subsystem) 00:14:33.999 Entry Flags: 00:14:33.999 Duplicate Returned Information: 0 00:14:33.999 Explicit Persistent Connection Support for Discovery: 0 00:14:33.999 Transport Requirements: 00:14:33.999 Secure Channel: Not Required 00:14:33.999 Port ID: 0 (0x0000) 00:14:33.999 Controller ID: 65535 (0xffff) 00:14:33.999 Admin Max SQ Size: 128 00:14:33.999 Transport Service Identifier: 4420 00:14:33.999 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:14:33.999 Transport Address: 10.0.0.2 [2024-07-13 08:00:39.683488] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:14:33.999 [2024-07-13 08:00:39.683508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.999 [2024-07-13 08:00:39.683517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.999 [2024-07-13 08:00:39.683523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.999 [2024-07-13 08:00:39.683530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.999 [2024-07-13 08:00:39.683540] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.683545] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.683549] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10076c0) 00:14:33.999 [2024-07-13 08:00:39.683558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.999 [2024-07-13 08:00:39.683584] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e380, cid 3, qid 0 00:14:33.999 [2024-07-13 08:00:39.683644] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.999 [2024-07-13 08:00:39.683651] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.999 [2024-07-13 08:00:39.683655] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.683660] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e380) on tqpair=0x10076c0 00:14:33.999 [2024-07-13 08:00:39.683669] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.683673] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.683678] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10076c0) 00:14:33.999 [2024-07-13 08:00:39.683685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.999 [2024-07-13 08:00:39.683708] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e380, cid 3, qid 0 00:14:33.999 [2024-07-13 08:00:39.683786] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.999 [2024-07-13 08:00:39.683795] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.999 [2024-07-13 08:00:39.683799] 
nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.683803] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e380) on tqpair=0x10076c0 00:14:33.999 [2024-07-13 08:00:39.683810] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:14:33.999 [2024-07-13 08:00:39.683815] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:14:33.999 [2024-07-13 08:00:39.683826] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.683832] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.683836] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10076c0) 00:14:33.999 [2024-07-13 08:00:39.683844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.999 [2024-07-13 08:00:39.683865] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e380, cid 3, qid 0 00:14:33.999 [2024-07-13 08:00:39.683915] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.999 [2024-07-13 08:00:39.683922] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.999 [2024-07-13 08:00:39.683926] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.683931] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e380) on tqpair=0x10076c0 00:14:33.999 [2024-07-13 08:00:39.683943] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.683948] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.683952] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10076c0) 00:14:33.999 [2024-07-13 08:00:39.683960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.999 [2024-07-13 08:00:39.683978] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e380, cid 3, qid 0 00:14:33.999 [2024-07-13 08:00:39.684024] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.999 [2024-07-13 08:00:39.684031] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.999 [2024-07-13 08:00:39.684035] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.684040] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e380) on tqpair=0x10076c0 00:14:33.999 [2024-07-13 08:00:39.684051] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.684056] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.684060] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10076c0) 00:14:33.999 [2024-07-13 08:00:39.684067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:33.999 [2024-07-13 08:00:39.684085] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e380, cid 3, qid 0 00:14:33.999 [2024-07-13 08:00:39.684131] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:33.999 [2024-07-13 
08:00:39.684138] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:33.999 [2024-07-13 08:00:39.684142] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:33.999 [2024-07-13 08:00:39.684146] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e380) on tqpair=0x10076c0 00:14:33.999 [2024-07-13 08:00:39.684158] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684163] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684167] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10076c0) 00:14:34.000 [2024-07-13 08:00:39.684174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.000 [2024-07-13 08:00:39.684191] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e380, cid 3, qid 0 00:14:34.000 [2024-07-13 08:00:39.684237] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.000 [2024-07-13 08:00:39.684244] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.000 [2024-07-13 08:00:39.684249] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684253] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e380) on tqpair=0x10076c0 00:14:34.000 [2024-07-13 08:00:39.684264] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684269] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684273] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10076c0) 00:14:34.000 [2024-07-13 08:00:39.684280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.000 [2024-07-13 08:00:39.684298] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e380, cid 3, qid 0 00:14:34.000 [2024-07-13 08:00:39.684344] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.000 [2024-07-13 08:00:39.684352] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.000 [2024-07-13 08:00:39.684356] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684360] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e380) on tqpair=0x10076c0 00:14:34.000 [2024-07-13 08:00:39.684371] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684376] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684380] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10076c0) 00:14:34.000 [2024-07-13 08:00:39.684387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.000 [2024-07-13 08:00:39.684405] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e380, cid 3, qid 0 00:14:34.000 [2024-07-13 08:00:39.684451] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.000 [2024-07-13 08:00:39.684458] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.000 [2024-07-13 08:00:39.684462] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:14:34.000 [2024-07-13 08:00:39.684466] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e380) on tqpair=0x10076c0 00:14:34.000 [2024-07-13 08:00:39.684477] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684482] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684486] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10076c0) 00:14:34.000 [2024-07-13 08:00:39.684494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.000 [2024-07-13 08:00:39.684511] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e380, cid 3, qid 0 00:14:34.000 [2024-07-13 08:00:39.684560] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.000 [2024-07-13 08:00:39.684567] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.000 [2024-07-13 08:00:39.684572] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684576] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e380) on tqpair=0x10076c0 00:14:34.000 [2024-07-13 08:00:39.684587] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684592] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684596] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10076c0) 00:14:34.000 [2024-07-13 08:00:39.684603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.000 [2024-07-13 08:00:39.684620] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e380, cid 3, qid 0 00:14:34.000 [2024-07-13 08:00:39.684666] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.000 [2024-07-13 08:00:39.684673] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.000 [2024-07-13 08:00:39.684677] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684682] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e380) on tqpair=0x10076c0 00:14:34.000 [2024-07-13 08:00:39.684694] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684698] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.684702] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10076c0) 00:14:34.000 [2024-07-13 08:00:39.684710] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.000 [2024-07-13 08:00:39.684728] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e380, cid 3, qid 0 00:14:34.000 [2024-07-13 08:00:39.688832] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.000 [2024-07-13 08:00:39.688855] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.000 [2024-07-13 08:00:39.688860] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.688865] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e380) on tqpair=0x10076c0 00:14:34.000 [2024-07-13 08:00:39.688881] 
nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.688886] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.688890] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x10076c0) 00:14:34.000 [2024-07-13 08:00:39.688899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.000 [2024-07-13 08:00:39.688924] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x103e380, cid 3, qid 0 00:14:34.000 [2024-07-13 08:00:39.688976] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.000 [2024-07-13 08:00:39.688983] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.000 [2024-07-13 08:00:39.688987] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.000 [2024-07-13 08:00:39.688991] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x103e380) on tqpair=0x10076c0 00:14:34.000 [2024-07-13 08:00:39.689000] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 5 milliseconds 00:14:34.000 00:14:34.000 08:00:39 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:14:34.000 [2024-07-13 08:00:39.724110] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:34.000 [2024-07-13 08:00:39.724146] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76816 ] 00:14:34.271 [2024-07-13 08:00:39.859581] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:14:34.271 [2024-07-13 08:00:39.859641] nvme_tcp.c:2244:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:14:34.271 [2024-07-13 08:00:39.859648] nvme_tcp.c:2248:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:14:34.271 [2024-07-13 08:00:39.859660] nvme_tcp.c:2266:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:14:34.271 [2024-07-13 08:00:39.859671] sock.c: 334:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl uring 00:14:34.271 [2024-07-13 08:00:39.859799] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:14:34.271 [2024-07-13 08:00:39.859851] nvme_tcp.c:1487:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b4a6c0 0 00:14:34.271 [2024-07-13 08:00:39.864820] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:14:34.271 [2024-07-13 08:00:39.864847] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:14:34.271 [2024-07-13 08:00:39.864854] nvme_tcp.c:1533:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:14:34.271 [2024-07-13 08:00:39.864858] nvme_tcp.c:1534:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:14:34.271 [2024-07-13 08:00:39.864906] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.271 [2024-07-13 08:00:39.864913] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.271 [2024-07-13 
08:00:39.864918] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b4a6c0) 00:14:34.271 [2024-07-13 08:00:39.864931] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:14:34.271 [2024-07-13 08:00:39.864962] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b80f60, cid 0, qid 0 00:14:34.271 [2024-07-13 08:00:39.871845] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.271 [2024-07-13 08:00:39.871867] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.271 [2024-07-13 08:00:39.871873] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.271 [2024-07-13 08:00:39.871878] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b80f60) on tqpair=0x1b4a6c0 00:14:34.271 [2024-07-13 08:00:39.871892] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:14:34.271 [2024-07-13 08:00:39.871900] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:14:34.271 [2024-07-13 08:00:39.871906] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:14:34.271 [2024-07-13 08:00:39.871921] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.271 [2024-07-13 08:00:39.871927] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.271 [2024-07-13 08:00:39.871931] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b4a6c0) 00:14:34.271 [2024-07-13 08:00:39.871941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.272 [2024-07-13 08:00:39.871968] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b80f60, cid 0, qid 0 00:14:34.272 [2024-07-13 08:00:39.872030] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.272 [2024-07-13 08:00:39.872037] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.272 [2024-07-13 08:00:39.872041] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.872046] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b80f60) on tqpair=0x1b4a6c0 00:14:34.272 [2024-07-13 08:00:39.872053] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:14:34.272 [2024-07-13 08:00:39.872061] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:14:34.272 [2024-07-13 08:00:39.872069] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.872073] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.872077] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b4a6c0) 00:14:34.272 [2024-07-13 08:00:39.872085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.272 [2024-07-13 08:00:39.872104] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b80f60, cid 0, qid 0 00:14:34.272 [2024-07-13 08:00:39.872477] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.272 
[2024-07-13 08:00:39.872493] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.272 [2024-07-13 08:00:39.872498] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.872503] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b80f60) on tqpair=0x1b4a6c0 00:14:34.272 [2024-07-13 08:00:39.872510] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:14:34.272 [2024-07-13 08:00:39.872520] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:14:34.272 [2024-07-13 08:00:39.872529] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.872533] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.872537] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b4a6c0) 00:14:34.272 [2024-07-13 08:00:39.872545] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.272 [2024-07-13 08:00:39.872566] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b80f60, cid 0, qid 0 00:14:34.272 [2024-07-13 08:00:39.872623] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.272 [2024-07-13 08:00:39.872630] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.272 [2024-07-13 08:00:39.872634] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.872638] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b80f60) on tqpair=0x1b4a6c0 00:14:34.272 [2024-07-13 08:00:39.872645] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:34.272 [2024-07-13 08:00:39.872656] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.872661] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.872665] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b4a6c0) 00:14:34.272 [2024-07-13 08:00:39.872672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.272 [2024-07-13 08:00:39.872690] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b80f60, cid 0, qid 0 00:14:34.272 [2024-07-13 08:00:39.873131] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.272 [2024-07-13 08:00:39.873147] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.272 [2024-07-13 08:00:39.873152] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.873157] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b80f60) on tqpair=0x1b4a6c0 00:14:34.272 [2024-07-13 08:00:39.873164] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:14:34.272 [2024-07-13 08:00:39.873170] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:14:34.272 [2024-07-13 08:00:39.873179] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:34.272 [2024-07-13 08:00:39.873286] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:14:34.272 [2024-07-13 08:00:39.873290] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:34.272 [2024-07-13 08:00:39.873300] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.873305] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.873309] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b4a6c0) 00:14:34.272 [2024-07-13 08:00:39.873317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.272 [2024-07-13 08:00:39.873340] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b80f60, cid 0, qid 0 00:14:34.272 [2024-07-13 08:00:39.873832] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.272 [2024-07-13 08:00:39.873841] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.272 [2024-07-13 08:00:39.873845] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.873850] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b80f60) on tqpair=0x1b4a6c0 00:14:34.272 [2024-07-13 08:00:39.873857] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:34.272 [2024-07-13 08:00:39.873868] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.873873] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.272 [2024-07-13 08:00:39.873877] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b4a6c0) 00:14:34.272 [2024-07-13 08:00:39.873885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.272 [2024-07-13 08:00:39.873905] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b80f60, cid 0, qid 0 00:14:34.272 [2024-07-13 08:00:39.873959] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.273 [2024-07-13 08:00:39.873966] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.273 [2024-07-13 08:00:39.873970] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.273 [2024-07-13 08:00:39.873975] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b80f60) on tqpair=0x1b4a6c0 00:14:34.273 [2024-07-13 08:00:39.873981] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:34.273 [2024-07-13 08:00:39.873987] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:14:34.273 [2024-07-13 08:00:39.873995] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:14:34.273 [2024-07-13 08:00:39.874010] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for 
identify controller (timeout 30000 ms) 00:14:34.273 [2024-07-13 08:00:39.874021] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.273 [2024-07-13 08:00:39.874026] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.273 [2024-07-13 08:00:39.874030] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b4a6c0) 00:14:34.273 [2024-07-13 08:00:39.874038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.273 [2024-07-13 08:00:39.874079] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b80f60, cid 0, qid 0 00:14:34.273 [2024-07-13 08:00:39.874167] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.273 [2024-07-13 08:00:39.874175] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.273 [2024-07-13 08:00:39.874179] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.273 [2024-07-13 08:00:39.874184] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b4a6c0): datao=0, datal=4096, cccid=0 00:14:34.273 [2024-07-13 08:00:39.874189] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b80f60) on tqpair(0x1b4a6c0): expected_datao=0, payload_size=4096 00:14:34.273 [2024-07-13 08:00:39.874198] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.273 [2024-07-13 08:00:39.874204] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.273 [2024-07-13 08:00:39.874213] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.273 [2024-07-13 08:00:39.874219] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.273 [2024-07-13 08:00:39.874223] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.273 [2024-07-13 08:00:39.874228] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b80f60) on tqpair=0x1b4a6c0 00:14:34.273 [2024-07-13 08:00:39.874237] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:14:34.273 [2024-07-13 08:00:39.874244] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:14:34.273 [2024-07-13 08:00:39.874250] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:14:34.273 [2024-07-13 08:00:39.874254] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:14:34.273 [2024-07-13 08:00:39.874260] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:14:34.273 [2024-07-13 08:00:39.874265] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:14:34.273 [2024-07-13 08:00:39.874279] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:14:34.273 [2024-07-13 08:00:39.874288] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.273 [2024-07-13 08:00:39.874293] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.273 [2024-07-13 08:00:39.874297] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b4a6c0) 00:14:34.273 [2024-07-13 08:00:39.874305] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:34.273 [2024-07-13 08:00:39.874325] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b80f60, cid 0, qid 0 00:14:34.273 [2024-07-13 08:00:39.874386] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.273 [2024-07-13 08:00:39.874393] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.273 [2024-07-13 08:00:39.874397] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.273 [2024-07-13 08:00:39.874401] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b80f60) on tqpair=0x1b4a6c0 00:14:34.273 [2024-07-13 08:00:39.874411] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.273 [2024-07-13 08:00:39.874415] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.273 [2024-07-13 08:00:39.874420] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b4a6c0) 00:14:34.273 [2024-07-13 08:00:39.874427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.273 [2024-07-13 08:00:39.874433] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.273 [2024-07-13 08:00:39.874438] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.274 [2024-07-13 08:00:39.874442] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b4a6c0) 00:14:34.274 [2024-07-13 08:00:39.874448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.274 [2024-07-13 08:00:39.874455] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.274 [2024-07-13 08:00:39.874459] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.274 [2024-07-13 08:00:39.874463] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b4a6c0) 00:14:34.274 [2024-07-13 08:00:39.874469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.274 [2024-07-13 08:00:39.874476] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.274 [2024-07-13 08:00:39.874480] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.274 [2024-07-13 08:00:39.874484] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.274 [2024-07-13 08:00:39.874490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.274 [2024-07-13 08:00:39.874496] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:34.274 [2024-07-13 08:00:39.874509] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:34.274 [2024-07-13 08:00:39.874518] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.274 [2024-07-13 08:00:39.874523] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.274 [2024-07-13 08:00:39.874527] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x1b4a6c0) 00:14:34.274 [2024-07-13 08:00:39.874534] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.274 [2024-07-13 08:00:39.874554] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b80f60, cid 0, qid 0 00:14:34.274 [2024-07-13 08:00:39.874562] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b810c0, cid 1, qid 0 00:14:34.274 [2024-07-13 08:00:39.874567] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81220, cid 2, qid 0 00:14:34.274 [2024-07-13 08:00:39.874572] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.274 [2024-07-13 08:00:39.874577] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b814e0, cid 4, qid 0 00:14:34.274 [2024-07-13 08:00:39.874671] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.274 [2024-07-13 08:00:39.874678] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.274 [2024-07-13 08:00:39.874682] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.274 [2024-07-13 08:00:39.874687] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b814e0) on tqpair=0x1b4a6c0 00:14:34.274 [2024-07-13 08:00:39.874694] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:14:34.274 [2024-07-13 08:00:39.874700] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:34.274 [2024-07-13 08:00:39.874709] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:14:34.274 [2024-07-13 08:00:39.874719] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:34.274 [2024-07-13 08:00:39.874727] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.274 [2024-07-13 08:00:39.874732] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.274 [2024-07-13 08:00:39.874736] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b4a6c0) 00:14:34.274 [2024-07-13 08:00:39.874743] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:14:34.274 [2024-07-13 08:00:39.874762] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b814e0, cid 4, qid 0 00:14:34.274 [2024-07-13 08:00:39.875369] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.274 [2024-07-13 08:00:39.875383] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.274 [2024-07-13 08:00:39.875388] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.274 [2024-07-13 08:00:39.875392] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b814e0) on tqpair=0x1b4a6c0 00:14:34.274 [2024-07-13 08:00:39.875459] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:14:34.274 [2024-07-13 08:00:39.875471] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 
30000 ms) 00:14:34.274 [2024-07-13 08:00:39.875479] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.274 [2024-07-13 08:00:39.875484] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.274 [2024-07-13 08:00:39.875488] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b4a6c0) 00:14:34.274 [2024-07-13 08:00:39.875496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.274 [2024-07-13 08:00:39.875517] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b814e0, cid 4, qid 0 00:14:34.274 [2024-07-13 08:00:39.879824] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.274 [2024-07-13 08:00:39.879842] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.275 [2024-07-13 08:00:39.879847] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.275 [2024-07-13 08:00:39.879852] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b4a6c0): datao=0, datal=4096, cccid=4 00:14:34.275 [2024-07-13 08:00:39.879857] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b814e0) on tqpair(0x1b4a6c0): expected_datao=0, payload_size=4096 00:14:34.275 [2024-07-13 08:00:39.879866] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.275 [2024-07-13 08:00:39.879871] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.275 [2024-07-13 08:00:39.879877] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.275 [2024-07-13 08:00:39.879884] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.275 [2024-07-13 08:00:39.879888] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.275 [2024-07-13 08:00:39.879892] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b814e0) on tqpair=0x1b4a6c0 00:14:34.275 [2024-07-13 08:00:39.879911] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:14:34.275 [2024-07-13 08:00:39.879922] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:14:34.275 [2024-07-13 08:00:39.879934] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:14:34.275 [2024-07-13 08:00:39.879943] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.275 [2024-07-13 08:00:39.879948] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.275 [2024-07-13 08:00:39.879952] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b4a6c0) 00:14:34.275 [2024-07-13 08:00:39.879961] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.275 [2024-07-13 08:00:39.879986] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b814e0, cid 4, qid 0 00:14:34.275 [2024-07-13 08:00:39.880413] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.275 [2024-07-13 08:00:39.880426] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.275 [2024-07-13 08:00:39.880431] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.275 [2024-07-13 
08:00:39.880435] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b4a6c0): datao=0, datal=4096, cccid=4 00:14:34.275 [2024-07-13 08:00:39.880440] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b814e0) on tqpair(0x1b4a6c0): expected_datao=0, payload_size=4096 00:14:34.275 [2024-07-13 08:00:39.880448] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.275 [2024-07-13 08:00:39.880453] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.275 [2024-07-13 08:00:39.880946] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.275 [2024-07-13 08:00:39.880958] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.275 [2024-07-13 08:00:39.880962] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.275 [2024-07-13 08:00:39.880967] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b814e0) on tqpair=0x1b4a6c0 00:14:34.275 [2024-07-13 08:00:39.880984] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:34.275 [2024-07-13 08:00:39.880996] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:34.275 [2024-07-13 08:00:39.881005] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.275 [2024-07-13 08:00:39.881010] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.275 [2024-07-13 08:00:39.881014] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b4a6c0) 00:14:34.275 [2024-07-13 08:00:39.881022] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.275 [2024-07-13 08:00:39.881044] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b814e0, cid 4, qid 0 00:14:34.275 [2024-07-13 08:00:39.881597] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.275 [2024-07-13 08:00:39.881610] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.275 [2024-07-13 08:00:39.881615] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.275 [2024-07-13 08:00:39.881619] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b4a6c0): datao=0, datal=4096, cccid=4 00:14:34.275 [2024-07-13 08:00:39.881624] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b814e0) on tqpair(0x1b4a6c0): expected_datao=0, payload_size=4096 00:14:34.275 [2024-07-13 08:00:39.881633] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.275 [2024-07-13 08:00:39.881637] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.275 [2024-07-13 08:00:39.881647] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.275 [2024-07-13 08:00:39.881653] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.276 [2024-07-13 08:00:39.881657] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.881662] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b814e0) on tqpair=0x1b4a6c0 00:14:34.276 [2024-07-13 08:00:39.881687] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific 
(timeout 30000 ms) 00:14:34.276 [2024-07-13 08:00:39.881697] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:14:34.276 [2024-07-13 08:00:39.881708] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:14:34.276 [2024-07-13 08:00:39.881715] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:34.276 [2024-07-13 08:00:39.881721] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:14:34.276 [2024-07-13 08:00:39.881726] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:14:34.276 [2024-07-13 08:00:39.881731] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:14:34.276 [2024-07-13 08:00:39.881737] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:14:34.276 [2024-07-13 08:00:39.881752] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.881758] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.881762] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b4a6c0) 00:14:34.276 [2024-07-13 08:00:39.881770] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.276 [2024-07-13 08:00:39.881777] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.881782] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.881802] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b4a6c0) 00:14:34.276 [2024-07-13 08:00:39.881821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:14:34.276 [2024-07-13 08:00:39.881848] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b814e0, cid 4, qid 0 00:14:34.276 [2024-07-13 08:00:39.881857] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81640, cid 5, qid 0 00:14:34.276 [2024-07-13 08:00:39.881932] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.276 [2024-07-13 08:00:39.881939] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.276 [2024-07-13 08:00:39.881943] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.881947] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b814e0) on tqpair=0x1b4a6c0 00:14:34.276 [2024-07-13 08:00:39.881956] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.276 [2024-07-13 08:00:39.881963] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.276 [2024-07-13 08:00:39.881967] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.881972] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81640) on tqpair=0x1b4a6c0 00:14:34.276 [2024-07-13 08:00:39.881984] nvme_tcp.c: 
739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.881989] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.881993] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b4a6c0) 00:14:34.276 [2024-07-13 08:00:39.882000] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.276 [2024-07-13 08:00:39.882019] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81640, cid 5, qid 0 00:14:34.276 [2024-07-13 08:00:39.882082] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.276 [2024-07-13 08:00:39.882091] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.276 [2024-07-13 08:00:39.882095] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.882099] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81640) on tqpair=0x1b4a6c0 00:14:34.276 [2024-07-13 08:00:39.882112] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.882116] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.882120] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b4a6c0) 00:14:34.276 [2024-07-13 08:00:39.882128] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.276 [2024-07-13 08:00:39.882147] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81640, cid 5, qid 0 00:14:34.276 [2024-07-13 08:00:39.882195] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.276 [2024-07-13 08:00:39.882202] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.276 [2024-07-13 08:00:39.882206] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.882210] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81640) on tqpair=0x1b4a6c0 00:14:34.276 [2024-07-13 08:00:39.882222] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.882227] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.276 [2024-07-13 08:00:39.882231] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b4a6c0) 00:14:34.276 [2024-07-13 08:00:39.882239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.276 [2024-07-13 08:00:39.882256] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81640, cid 5, qid 0 00:14:34.276 [2024-07-13 08:00:39.882304] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.277 [2024-07-13 08:00:39.882311] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.277 [2024-07-13 08:00:39.882315] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882319] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81640) on tqpair=0x1b4a6c0 00:14:34.277 [2024-07-13 08:00:39.882334] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882340] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882344] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b4a6c0) 00:14:34.277 [2024-07-13 08:00:39.882351] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.277 [2024-07-13 08:00:39.882359] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882364] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882384] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b4a6c0) 00:14:34.277 [2024-07-13 08:00:39.882391] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.277 [2024-07-13 08:00:39.882399] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882403] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882407] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1b4a6c0) 00:14:34.277 [2024-07-13 08:00:39.882413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.277 [2024-07-13 08:00:39.882421] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882425] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882429] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b4a6c0) 00:14:34.277 [2024-07-13 08:00:39.882435] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.277 [2024-07-13 08:00:39.882454] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81640, cid 5, qid 0 00:14:34.277 [2024-07-13 08:00:39.882461] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b814e0, cid 4, qid 0 00:14:34.277 [2024-07-13 08:00:39.882466] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b817a0, cid 6, qid 0 00:14:34.277 [2024-07-13 08:00:39.882471] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81900, cid 7, qid 0 00:14:34.277 [2024-07-13 08:00:39.882599] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.277 [2024-07-13 08:00:39.882606] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.277 [2024-07-13 08:00:39.882625] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882629] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b4a6c0): datao=0, datal=8192, cccid=5 00:14:34.277 [2024-07-13 08:00:39.882634] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b81640) on tqpair(0x1b4a6c0): expected_datao=0, payload_size=8192 00:14:34.277 [2024-07-13 08:00:39.882651] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882656] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882662] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.277 [2024-07-13 08:00:39.882668] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.277 [2024-07-13 08:00:39.882672] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882675] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b4a6c0): datao=0, datal=512, cccid=4 00:14:34.277 [2024-07-13 08:00:39.882680] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b814e0) on tqpair(0x1b4a6c0): expected_datao=0, payload_size=512 00:14:34.277 [2024-07-13 08:00:39.882687] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882691] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882697] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.277 [2024-07-13 08:00:39.882703] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.277 [2024-07-13 08:00:39.882706] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882710] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b4a6c0): datao=0, datal=512, cccid=6 00:14:34.277 [2024-07-13 08:00:39.882715] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b817a0) on tqpair(0x1b4a6c0): expected_datao=0, payload_size=512 00:14:34.277 [2024-07-13 08:00:39.882722] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882727] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882733] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:14:34.277 [2024-07-13 08:00:39.882738] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:14:34.277 [2024-07-13 08:00:39.882759] nvme_tcp.c:1650:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882763] nvme_tcp.c:1651:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b4a6c0): datao=0, datal=4096, cccid=7 00:14:34.277 [2024-07-13 08:00:39.882768] nvme_tcp.c:1662:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b81900) on tqpair(0x1b4a6c0): expected_datao=0, payload_size=4096 00:14:34.277 [2024-07-13 08:00:39.882776] nvme_tcp.c:1453:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882780] nvme_tcp.c:1237:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:14:34.277 [2024-07-13 08:00:39.882802] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.277 [2024-07-13 08:00:39.882808] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.278 [2024-07-13 08:00:39.882812] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.278 [2024-07-13 08:00:39.882816] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81640) on tqpair=0x1b4a6c0 00:14:34.278 [2024-07-13 08:00:39.882833] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.278 [2024-07-13 08:00:39.882854] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.278 [2024-07-13 08:00:39.882858] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.278 [2024-07-13 08:00:39.882862] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b814e0) on tqpair=0x1b4a6c0 00:14:34.278 [2024-07-13 08:00:39.882879] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:14:34.278 [2024-07-13 08:00:39.882885] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.278 [2024-07-13 08:00:39.882889] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.278 [2024-07-13 08:00:39.882894] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b817a0) on tqpair=0x1b4a6c0 00:14:34.278 [2024-07-13 08:00:39.882902] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.278 [2024-07-13 08:00:39.882909] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.278 [2024-07-13 08:00:39.882913] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.278 ===================================================== 00:14:34.278 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:34.278 ===================================================== 00:14:34.278 Controller Capabilities/Features 00:14:34.278 ================================ 00:14:34.278 Vendor ID: 8086 00:14:34.278 Subsystem Vendor ID: 8086 00:14:34.278 Serial Number: SPDK00000000000001 00:14:34.278 Model Number: SPDK bdev Controller 00:14:34.278 Firmware Version: 24.01.1 00:14:34.278 Recommended Arb Burst: 6 00:14:34.278 IEEE OUI Identifier: e4 d2 5c 00:14:34.278 Multi-path I/O 00:14:34.278 May have multiple subsystem ports: Yes 00:14:34.278 May have multiple controllers: Yes 00:14:34.278 Associated with SR-IOV VF: No 00:14:34.278 Max Data Transfer Size: 131072 00:14:34.278 Max Number of Namespaces: 32 00:14:34.278 Max Number of I/O Queues: 127 00:14:34.278 NVMe Specification Version (VS): 1.3 00:14:34.278 NVMe Specification Version (Identify): 1.3 00:14:34.278 Maximum Queue Entries: 128 00:14:34.278 Contiguous Queues Required: Yes 00:14:34.278 Arbitration Mechanisms Supported 00:14:34.278 Weighted Round Robin: Not Supported 00:14:34.278 Vendor Specific: Not Supported 00:14:34.278 Reset Timeout: 15000 ms 00:14:34.278 Doorbell Stride: 4 bytes 00:14:34.278 NVM Subsystem Reset: Not Supported 00:14:34.278 Command Sets Supported 00:14:34.278 NVM Command Set: Supported 00:14:34.278 Boot Partition: Not Supported 00:14:34.278 Memory Page Size Minimum: 4096 bytes 00:14:34.278 Memory Page Size Maximum: 4096 bytes 00:14:34.278 Persistent Memory Region: Not Supported 00:14:34.278 Optional Asynchronous Events Supported 00:14:34.278 Namespace Attribute Notices: Supported 00:14:34.278 Firmware Activation Notices: Not Supported 00:14:34.278 ANA Change Notices: Not Supported 00:14:34.278 PLE Aggregate Log Change Notices: Not Supported 00:14:34.278 LBA Status Info Alert Notices: Not Supported 00:14:34.278 EGE Aggregate Log Change Notices: Not Supported 00:14:34.278 Normal NVM Subsystem Shutdown event: Not Supported 00:14:34.278 Zone Descriptor Change Notices: Not Supported 00:14:34.278 Discovery Log Change Notices: Not Supported 00:14:34.278 Controller Attributes 00:14:34.278 128-bit Host Identifier: Supported 00:14:34.278 Non-Operational Permissive Mode: Not Supported 00:14:34.278 NVM Sets: Not Supported 00:14:34.278 Read Recovery Levels: Not Supported 00:14:34.278 Endurance Groups: Not Supported 00:14:34.278 Predictable Latency Mode: Not Supported 00:14:34.278 Traffic Based Keep ALive: Not Supported 00:14:34.278 Namespace Granularity: Not Supported 00:14:34.278 SQ Associations: Not Supported 00:14:34.278 UUID List: Not Supported 00:14:34.278 Multi-Domain Subsystem: Not Supported 00:14:34.278 Fixed Capacity Management: Not Supported 00:14:34.278 Variable Capacity Management: 
Not Supported 00:14:34.278 Delete Endurance Group: Not Supported 00:14:34.278 Delete NVM Set: Not Supported 00:14:34.278 Extended LBA Formats Supported: Not Supported 00:14:34.278 Flexible Data Placement Supported: Not Supported 00:14:34.278 00:14:34.278 Controller Memory Buffer Support 00:14:34.278 ================================ 00:14:34.278 Supported: No 00:14:34.278 00:14:34.278 Persistent Memory Region Support 00:14:34.278 ================================ 00:14:34.278 Supported: No 00:14:34.278 00:14:34.278 Admin Command Set Attributes 00:14:34.278 ============================ 00:14:34.278 Security Send/Receive: Not Supported 00:14:34.278 Format NVM: Not Supported 00:14:34.278 Firmware Activate/Download: Not Supported 00:14:34.278 Namespace Management: Not Supported 00:14:34.278 Device Self-Test: Not Supported 00:14:34.278 Directives: Not Supported 00:14:34.278 NVMe-MI: Not Supported 00:14:34.278 Virtualization Management: Not Supported 00:14:34.278 Doorbell Buffer Config: Not Supported 00:14:34.278 Get LBA Status Capability: Not Supported 00:14:34.278 Command & Feature Lockdown Capability: Not Supported 00:14:34.278 Abort Command Limit: 4 00:14:34.278 Async Event Request Limit: 4 00:14:34.278 Number of Firmware Slots: N/A 00:14:34.278 Firmware Slot 1 Read-Only: N/A 00:14:34.278 Firmware Activation Without Reset: N/A 00:14:34.278 Multiple Update Detection Support: N/A 00:14:34.278 Firmware Update Granularity: No Information Provided 00:14:34.278 Per-Namespace SMART Log: No 00:14:34.278 Asymmetric Namespace Access Log Page: Not Supported 00:14:34.278 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:14:34.278 Command Effects Log Page: Supported 00:14:34.278 Get Log Page Extended Data: Supported 00:14:34.278 Telemetry Log Pages: Not Supported 00:14:34.278 Persistent Event Log Pages: Not Supported 00:14:34.278 Supported Log Pages Log Page: May Support 00:14:34.279 Commands Supported & Effects Log Page: Not Supported 00:14:34.279 Feature Identifiers & Effects Log Page:May Support 00:14:34.279 NVMe-MI Commands & Effects Log Page: May Support 00:14:34.279 Data Area 4 for Telemetry Log: Not Supported 00:14:34.279 Error Log Page Entries Supported: 128 00:14:34.279 Keep Alive: Supported 00:14:34.279 Keep Alive Granularity: 10000 ms 00:14:34.279 00:14:34.279 NVM Command Set Attributes 00:14:34.279 ========================== 00:14:34.279 Submission Queue Entry Size 00:14:34.279 Max: 64 00:14:34.279 Min: 64 00:14:34.279 Completion Queue Entry Size 00:14:34.279 Max: 16 00:14:34.279 Min: 16 00:14:34.279 Number of Namespaces: 32 00:14:34.279 Compare Command: Supported 00:14:34.279 Write Uncorrectable Command: Not Supported 00:14:34.279 Dataset Management Command: Supported 00:14:34.279 Write Zeroes Command: Supported 00:14:34.279 Set Features Save Field: Not Supported 00:14:34.279 Reservations: Supported 00:14:34.279 Timestamp: Not Supported 00:14:34.279 Copy: Supported 00:14:34.279 Volatile Write Cache: Present 00:14:34.279 Atomic Write Unit (Normal): 1 00:14:34.279 Atomic Write Unit (PFail): 1 00:14:34.279 Atomic Compare & Write Unit: 1 00:14:34.279 Fused Compare & Write: Supported 00:14:34.279 Scatter-Gather List 00:14:34.279 SGL Command Set: Supported 00:14:34.279 SGL Keyed: Supported 00:14:34.279 SGL Bit Bucket Descriptor: Not Supported 00:14:34.279 SGL Metadata Pointer: Not Supported 00:14:34.279 Oversized SGL: Not Supported 00:14:34.279 SGL Metadata Address: Not Supported 00:14:34.279 SGL Offset: Supported 00:14:34.279 Transport SGL Data Block: Not Supported 00:14:34.279 Replay Protected Memory 
Block: Not Supported 00:14:34.279 00:14:34.279 Firmware Slot Information 00:14:34.279 ========================= 00:14:34.279 Active slot: 1 00:14:34.279 Slot 1 Firmware Revision: 24.01.1 00:14:34.279 00:14:34.279 00:14:34.279 Commands Supported and Effects 00:14:34.279 ============================== 00:14:34.279 Admin Commands 00:14:34.279 -------------- 00:14:34.279 Get Log Page (02h): Supported 00:14:34.279 Identify (06h): Supported 00:14:34.279 Abort (08h): Supported 00:14:34.279 Set Features (09h): Supported 00:14:34.279 Get Features (0Ah): Supported 00:14:34.279 Asynchronous Event Request (0Ch): Supported 00:14:34.279 Keep Alive (18h): Supported 00:14:34.279 I/O Commands 00:14:34.279 ------------ 00:14:34.279 Flush (00h): Supported LBA-Change 00:14:34.279 Write (01h): Supported LBA-Change 00:14:34.279 Read (02h): Supported 00:14:34.279 Compare (05h): Supported 00:14:34.279 Write Zeroes (08h): Supported LBA-Change 00:14:34.279 Dataset Management (09h): Supported LBA-Change 00:14:34.279 Copy (19h): Supported LBA-Change 00:14:34.279 Unknown (79h): Supported LBA-Change 00:14:34.279 Unknown (7Ah): Supported 00:14:34.279 00:14:34.279 Error Log 00:14:34.279 ========= 00:14:34.279 00:14:34.279 Arbitration 00:14:34.279 =========== 00:14:34.279 Arbitration Burst: 1 00:14:34.279 00:14:34.279 Power Management 00:14:34.279 ================ 00:14:34.279 Number of Power States: 1 00:14:34.279 Current Power State: Power State #0 00:14:34.279 Power State #0: 00:14:34.279 Max Power: 0.00 W 00:14:34.279 Non-Operational State: Operational 00:14:34.279 Entry Latency: Not Reported 00:14:34.280 Exit Latency: Not Reported 00:14:34.280 Relative Read Throughput: 0 00:14:34.280 Relative Read Latency: 0 00:14:34.280 Relative Write Throughput: 0 00:14:34.280 Relative Write Latency: 0 00:14:34.280 Idle Power: Not Reported 00:14:34.280 Active Power: Not Reported 00:14:34.280 Non-Operational Permissive Mode: Not Supported 00:14:34.280 00:14:34.280 Health Information 00:14:34.280 ================== 00:14:34.280 Critical Warnings: 00:14:34.280 Available Spare Space: OK 00:14:34.280 Temperature: OK 00:14:34.280 Device Reliability: OK 00:14:34.280 Read Only: No 00:14:34.280 Volatile Memory Backup: OK 00:14:34.280 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:34.280 Temperature Threshold: [2024-07-13 08:00:39.882917] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81900) on tqpair=0x1b4a6c0 00:14:34.280 [2024-07-13 08:00:39.883031] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.280 [2024-07-13 08:00:39.883038] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.280 [2024-07-13 08:00:39.883043] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1b4a6c0) 00:14:34.280 [2024-07-13 08:00:39.883051] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.280 [2024-07-13 08:00:39.883075] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81900, cid 7, qid 0 00:14:34.280 [2024-07-13 08:00:39.883629] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.280 [2024-07-13 08:00:39.886790] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.280 [2024-07-13 08:00:39.886808] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.280 [2024-07-13 08:00:39.886814] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81900) on 
tqpair=0x1b4a6c0 00:14:34.280 [2024-07-13 08:00:39.886857] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:14:34.280 [2024-07-13 08:00:39.886874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.280 [2024-07-13 08:00:39.886882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.280 [2024-07-13 08:00:39.886889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.280 [2024-07-13 08:00:39.886896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:34.280 [2024-07-13 08:00:39.886906] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.280 [2024-07-13 08:00:39.886911] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.280 [2024-07-13 08:00:39.886916] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.280 [2024-07-13 08:00:39.886925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.280 [2024-07-13 08:00:39.886971] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.280 [2024-07-13 08:00:39.887033] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.280 [2024-07-13 08:00:39.887040] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.280 [2024-07-13 08:00:39.887044] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.280 [2024-07-13 08:00:39.887048] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.280 [2024-07-13 08:00:39.887057] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.280 [2024-07-13 08:00:39.887062] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.280 [2024-07-13 08:00:39.887066] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.280 [2024-07-13 08:00:39.887073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.280 [2024-07-13 08:00:39.887095] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.280 [2024-07-13 08:00:39.887183] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.280 [2024-07-13 08:00:39.887190] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.281 [2024-07-13 08:00:39.887193] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887197] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.281 [2024-07-13 08:00:39.887204] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:14:34.281 [2024-07-13 08:00:39.887209] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:14:34.281 [2024-07-13 08:00:39.887219] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887223] nvme_tcp.c: 
893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887227] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.281 [2024-07-13 08:00:39.887234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.281 [2024-07-13 08:00:39.887251] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.281 [2024-07-13 08:00:39.887298] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.281 [2024-07-13 08:00:39.887304] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.281 [2024-07-13 08:00:39.887310] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887314] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.281 [2024-07-13 08:00:39.887326] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887330] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887334] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.281 [2024-07-13 08:00:39.887341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.281 [2024-07-13 08:00:39.887390] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.281 [2024-07-13 08:00:39.887443] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.281 [2024-07-13 08:00:39.887449] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.281 [2024-07-13 08:00:39.887453] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887457] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.281 [2024-07-13 08:00:39.887468] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887473] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887477] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.281 [2024-07-13 08:00:39.887484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.281 [2024-07-13 08:00:39.887500] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.281 [2024-07-13 08:00:39.887546] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.281 [2024-07-13 08:00:39.887553] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.281 [2024-07-13 08:00:39.887557] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887561] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.281 [2024-07-13 08:00:39.887572] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887576] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887580] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 
00:14:34.281 [2024-07-13 08:00:39.887587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.281 [2024-07-13 08:00:39.887603] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.281 [2024-07-13 08:00:39.887647] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.281 [2024-07-13 08:00:39.887654] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.281 [2024-07-13 08:00:39.887658] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887662] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.281 [2024-07-13 08:00:39.887673] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887677] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887681] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.281 [2024-07-13 08:00:39.887688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.281 [2024-07-13 08:00:39.887704] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.281 [2024-07-13 08:00:39.887765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.281 [2024-07-13 08:00:39.887772] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.281 [2024-07-13 08:00:39.887776] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887796] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.281 [2024-07-13 08:00:39.887807] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887812] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887815] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.281 [2024-07-13 08:00:39.887823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.281 [2024-07-13 08:00:39.887839] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.281 [2024-07-13 08:00:39.887897] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.281 [2024-07-13 08:00:39.887906] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.281 [2024-07-13 08:00:39.887910] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887914] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.281 [2024-07-13 08:00:39.887926] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887930] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.281 [2024-07-13 08:00:39.887934] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.281 [2024-07-13 08:00:39.887941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.282 
[2024-07-13 08:00:39.887960] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.282 [2024-07-13 08:00:39.888004] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.282 [2024-07-13 08:00:39.888011] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.282 [2024-07-13 08:00:39.888015] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888019] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.282 [2024-07-13 08:00:39.888030] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888035] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888038] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.282 [2024-07-13 08:00:39.888045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.282 [2024-07-13 08:00:39.888062] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.282 [2024-07-13 08:00:39.888105] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.282 [2024-07-13 08:00:39.888112] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.282 [2024-07-13 08:00:39.888115] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888120] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.282 [2024-07-13 08:00:39.888131] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888135] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888139] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.282 [2024-07-13 08:00:39.888161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.282 [2024-07-13 08:00:39.888177] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.282 [2024-07-13 08:00:39.888230] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.282 [2024-07-13 08:00:39.888237] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.282 [2024-07-13 08:00:39.888241] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888246] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.282 [2024-07-13 08:00:39.888256] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888261] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888264] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.282 [2024-07-13 08:00:39.888271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.282 [2024-07-13 08:00:39.888287] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.282 [2024-07-13 08:00:39.888344] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.282 [2024-07-13 08:00:39.888351] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.282 [2024-07-13 08:00:39.888355] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888359] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.282 [2024-07-13 08:00:39.888370] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888374] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888378] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.282 [2024-07-13 08:00:39.888385] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.282 [2024-07-13 08:00:39.888402] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.282 [2024-07-13 08:00:39.888445] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.282 [2024-07-13 08:00:39.888451] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.282 [2024-07-13 08:00:39.888455] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888459] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.282 [2024-07-13 08:00:39.888470] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888475] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888479] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.282 [2024-07-13 08:00:39.888486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.282 [2024-07-13 08:00:39.888502] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.282 [2024-07-13 08:00:39.888545] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.282 [2024-07-13 08:00:39.888552] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.282 [2024-07-13 08:00:39.888556] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888560] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.282 [2024-07-13 08:00:39.888571] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888575] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888579] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.282 [2024-07-13 08:00:39.888586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.282 [2024-07-13 08:00:39.888602] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.282 [2024-07-13 08:00:39.888654] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.282 [2024-07-13 08:00:39.888661] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.282 
[2024-07-13 08:00:39.888666] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888670] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.282 [2024-07-13 08:00:39.888681] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888685] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.282 [2024-07-13 08:00:39.888689] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.282 [2024-07-13 08:00:39.888696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.282 [2024-07-13 08:00:39.888713] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.282 [2024-07-13 08:00:39.888765] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.282 [2024-07-13 08:00:39.888771] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.282 [2024-07-13 08:00:39.888775] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.888779] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.283 [2024-07-13 08:00:39.888807] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.888811] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.888826] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.283 [2024-07-13 08:00:39.888834] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.283 [2024-07-13 08:00:39.888854] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.283 [2024-07-13 08:00:39.888900] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.283 [2024-07-13 08:00:39.888907] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.283 [2024-07-13 08:00:39.888911] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.888915] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.283 [2024-07-13 08:00:39.888927] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.888931] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.888935] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.283 [2024-07-13 08:00:39.888943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.283 [2024-07-13 08:00:39.888959] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.283 [2024-07-13 08:00:39.889002] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.283 [2024-07-13 08:00:39.889008] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.283 [2024-07-13 08:00:39.889012] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889016] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: 
complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.283 [2024-07-13 08:00:39.889028] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889032] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889036] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.283 [2024-07-13 08:00:39.889044] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.283 [2024-07-13 08:00:39.889060] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.283 [2024-07-13 08:00:39.889106] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.283 [2024-07-13 08:00:39.889112] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.283 [2024-07-13 08:00:39.889117] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889122] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.283 [2024-07-13 08:00:39.889133] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889138] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889142] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.283 [2024-07-13 08:00:39.889164] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.283 [2024-07-13 08:00:39.889181] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.283 [2024-07-13 08:00:39.889221] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.283 [2024-07-13 08:00:39.889228] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.283 [2024-07-13 08:00:39.889232] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889236] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.283 [2024-07-13 08:00:39.889247] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889251] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889255] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.283 [2024-07-13 08:00:39.889262] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.283 [2024-07-13 08:00:39.889278] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.283 [2024-07-13 08:00:39.889327] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.283 [2024-07-13 08:00:39.889334] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.283 [2024-07-13 08:00:39.889337] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889341] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.283 [2024-07-13 08:00:39.889352] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889357] 
nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889361] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.283 [2024-07-13 08:00:39.889368] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.283 [2024-07-13 08:00:39.889384] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.283 [2024-07-13 08:00:39.889424] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.283 [2024-07-13 08:00:39.889431] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.283 [2024-07-13 08:00:39.889435] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889439] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.283 [2024-07-13 08:00:39.889450] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889454] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889458] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.283 [2024-07-13 08:00:39.889465] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.283 [2024-07-13 08:00:39.889481] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.283 [2024-07-13 08:00:39.889524] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.283 [2024-07-13 08:00:39.889531] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.283 [2024-07-13 08:00:39.889536] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.283 [2024-07-13 08:00:39.889540] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.284 [2024-07-13 08:00:39.889551] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.889556] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.889559] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.284 [2024-07-13 08:00:39.889566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.284 [2024-07-13 08:00:39.889583] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.284 [2024-07-13 08:00:39.889635] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.284 [2024-07-13 08:00:39.889641] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.284 [2024-07-13 08:00:39.889645] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.889649] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.284 [2024-07-13 08:00:39.889660] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.889665] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.889668] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on 
tqpair(0x1b4a6c0) 00:14:34.284 [2024-07-13 08:00:39.889675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.284 [2024-07-13 08:00:39.889692] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.284 [2024-07-13 08:00:39.889738] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.284 [2024-07-13 08:00:39.889744] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.284 [2024-07-13 08:00:39.889748] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.889752] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.284 [2024-07-13 08:00:39.889763] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.889768] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.889771] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.284 [2024-07-13 08:00:39.889778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.284 [2024-07-13 08:00:39.889822] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.284 [2024-07-13 08:00:39.889876] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.284 [2024-07-13 08:00:39.889883] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.284 [2024-07-13 08:00:39.889886] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.889891] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.284 [2024-07-13 08:00:39.889902] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.889907] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.889911] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.284 [2024-07-13 08:00:39.889918] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.284 [2024-07-13 08:00:39.889935] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.284 [2024-07-13 08:00:39.889980] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.284 [2024-07-13 08:00:39.889987] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.284 [2024-07-13 08:00:39.889992] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.889996] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.284 [2024-07-13 08:00:39.890008] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.890013] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.890017] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.284 [2024-07-13 08:00:39.890024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:14:34.284 [2024-07-13 08:00:39.890041] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.284 [2024-07-13 08:00:39.890116] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.284 [2024-07-13 08:00:39.890124] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.284 [2024-07-13 08:00:39.890129] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.890133] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.284 [2024-07-13 08:00:39.890145] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.890150] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.890154] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.284 [2024-07-13 08:00:39.890162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.284 [2024-07-13 08:00:39.890180] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.284 [2024-07-13 08:00:39.890228] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.284 [2024-07-13 08:00:39.890235] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.284 [2024-07-13 08:00:39.890239] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.890243] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.284 [2024-07-13 08:00:39.890255] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.890260] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.890264] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.284 [2024-07-13 08:00:39.890272] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.284 [2024-07-13 08:00:39.890289] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.284 [2024-07-13 08:00:39.890341] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.284 [2024-07-13 08:00:39.890348] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.284 [2024-07-13 08:00:39.890353] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.890357] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.284 [2024-07-13 08:00:39.890369] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.890388] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.284 [2024-07-13 08:00:39.890392] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.284 [2024-07-13 08:00:39.890400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.285 [2024-07-13 08:00:39.890416] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.285 [2024-07-13 08:00:39.890485] 
nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.285 [2024-07-13 08:00:39.890492] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.285 [2024-07-13 08:00:39.890497] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.285 [2024-07-13 08:00:39.890501] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.285 [2024-07-13 08:00:39.890512] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.285 [2024-07-13 08:00:39.890517] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.285 [2024-07-13 08:00:39.890521] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.285 [2024-07-13 08:00:39.890528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.285 [2024-07-13 08:00:39.890544] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.285 [2024-07-13 08:00:39.890610] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.285 [2024-07-13 08:00:39.890617] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.285 [2024-07-13 08:00:39.890620] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.285 [2024-07-13 08:00:39.890624] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.285 [2024-07-13 08:00:39.890635] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.285 [2024-07-13 08:00:39.890640] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.285 [2024-07-13 08:00:39.890643] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.285 [2024-07-13 08:00:39.890650] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.285 [2024-07-13 08:00:39.890666] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.285 [2024-07-13 08:00:39.890708] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.285 [2024-07-13 08:00:39.890714] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.285 [2024-07-13 08:00:39.890718] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.285 [2024-07-13 08:00:39.890722] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.285 [2024-07-13 08:00:39.890732] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.285 [2024-07-13 08:00:39.890737] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.285 [2024-07-13 08:00:39.890740] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.285 [2024-07-13 08:00:39.890747] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.285 [2024-07-13 08:00:39.890763] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.285 [2024-07-13 08:00:39.890819] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.285 [2024-07-13 08:00:39.890826] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.285 
[2024-07-13 08:00:39.890830] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.285 [2024-07-13 08:00:39.890835] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.285 [2024-07-13 08:00:39.890846] nvme_tcp.c: 739:nvme_tcp_build_contig_request: *DEBUG*: enter 00:14:34.285 [2024-07-13 08:00:39.894869] nvme_tcp.c: 893:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:14:34.285 [2024-07-13 08:00:39.894875] nvme_tcp.c: 902:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b4a6c0) 00:14:34.285 [2024-07-13 08:00:39.894885] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:14:34.285 [2024-07-13 08:00:39.894916] nvme_tcp.c: 872:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b81380, cid 3, qid 0 00:14:34.285 [2024-07-13 08:00:39.894983] nvme_tcp.c:1105:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:14:34.285 [2024-07-13 08:00:39.894991] nvme_tcp.c:1888:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:14:34.285 [2024-07-13 08:00:39.894995] nvme_tcp.c:1580:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:14:34.285 [2024-07-13 08:00:39.895000] nvme_tcp.c: 857:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1b81380) on tqpair=0x1b4a6c0 00:14:34.285 [2024-07-13 08:00:39.895011] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:14:34.285 0 Kelvin (-273 Celsius) 00:14:34.285 Available Spare: 0% 00:14:34.285 Available Spare Threshold: 0% 00:14:34.285 Life Percentage Used: 0% 00:14:34.285 Data Units Read: 0 00:14:34.285 Data Units Written: 0 00:14:34.285 Host Read Commands: 0 00:14:34.285 Host Write Commands: 0 00:14:34.285 Controller Busy Time: 0 minutes 00:14:34.285 Power Cycles: 0 00:14:34.285 Power On Hours: 0 hours 00:14:34.285 Unsafe Shutdowns: 0 00:14:34.285 Unrecoverable Media Errors: 0 00:14:34.285 Lifetime Error Log Entries: 0 00:14:34.285 Warning Temperature Time: 0 minutes 00:14:34.285 Critical Temperature Time: 0 minutes 00:14:34.285 00:14:34.285 Number of Queues 00:14:34.285 ================ 00:14:34.285 Number of I/O Submission Queues: 127 00:14:34.285 Number of I/O Completion Queues: 127 00:14:34.285 00:14:34.285 Active Namespaces 00:14:34.285 ================= 00:14:34.285 Namespace ID:1 00:14:34.285 Error Recovery Timeout: Unlimited 00:14:34.285 Command Set Identifier: NVM (00h) 00:14:34.285 Deallocate: Supported 00:14:34.285 Deallocated/Unwritten Error: Not Supported 00:14:34.286 Deallocated Read Value: Unknown 00:14:34.286 Deallocate in Write Zeroes: Not Supported 00:14:34.286 Deallocated Guard Field: 0xFFFF 00:14:34.286 Flush: Supported 00:14:34.286 Reservation: Supported 00:14:34.286 Namespace Sharing Capabilities: Multiple Controllers 00:14:34.286 Size (in LBAs): 131072 (0GiB) 00:14:34.286 Capacity (in LBAs): 131072 (0GiB) 00:14:34.286 Utilization (in LBAs): 131072 (0GiB) 00:14:34.286 NGUID: ABCDEF0123456789ABCDEF0123456789 00:14:34.286 EUI64: ABCDEF0123456789 00:14:34.286 UUID: c0b20141-981d-4470-9e88-61b8230a43ba 00:14:34.286 Thin Provisioning: Not Supported 00:14:34.286 Per-NS Atomic Units: Yes 00:14:34.286 Atomic Boundary Size (Normal): 0 00:14:34.286 Atomic Boundary Size (PFail): 0 00:14:34.286 Atomic Boundary Offset: 0 00:14:34.286 Maximum Single Source Range Length: 65535 00:14:34.286 Maximum Copy Length: 65535 00:14:34.286 Maximum Source Range Count: 1 00:14:34.286 NGUID/EUI64 Never 
Reused: No 00:14:34.286 Namespace Write Protected: No 00:14:34.286 Number of LBA Formats: 1 00:14:34.286 Current LBA Format: LBA Format #00 00:14:34.286 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:34.286 00:14:34.286 08:00:39 -- host/identify.sh@51 -- # sync 00:14:34.286 08:00:39 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:34.286 08:00:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:14:34.286 08:00:39 -- common/autotest_common.sh@10 -- # set +x 00:14:34.286 08:00:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:14:34.286 08:00:39 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:14:34.286 08:00:39 -- host/identify.sh@56 -- # nvmftestfini 00:14:34.286 08:00:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:34.286 08:00:39 -- nvmf/common.sh@116 -- # sync 00:14:34.286 08:00:39 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:14:34.286 08:00:39 -- nvmf/common.sh@119 -- # set +e 00:14:34.286 08:00:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:34.286 08:00:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:14:34.286 rmmod nvme_tcp 00:14:34.286 rmmod nvme_fabrics 00:14:34.286 rmmod nvme_keyring 00:14:34.286 08:00:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:34.286 08:00:40 -- nvmf/common.sh@123 -- # set -e 00:14:34.286 08:00:40 -- nvmf/common.sh@124 -- # return 0 00:14:34.286 08:00:40 -- nvmf/common.sh@477 -- # '[' -n 76785 ']' 00:14:34.286 08:00:40 -- nvmf/common.sh@478 -- # killprocess 76785 00:14:34.286 08:00:40 -- common/autotest_common.sh@926 -- # '[' -z 76785 ']' 00:14:34.286 08:00:40 -- common/autotest_common.sh@930 -- # kill -0 76785 00:14:34.286 08:00:40 -- common/autotest_common.sh@931 -- # uname 00:14:34.286 08:00:40 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:14:34.286 08:00:40 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76785 00:14:34.286 killing process with pid 76785 00:14:34.286 08:00:40 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:14:34.286 08:00:40 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:14:34.286 08:00:40 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76785' 00:14:34.286 08:00:40 -- common/autotest_common.sh@945 -- # kill 76785 00:14:34.286 [2024-07-13 08:00:40.060397] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:14:34.286 08:00:40 -- common/autotest_common.sh@950 -- # wait 76785 00:14:34.551 08:00:40 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:34.551 08:00:40 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:14:34.551 08:00:40 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:14:34.551 08:00:40 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:34.551 08:00:40 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:14:34.551 08:00:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.551 08:00:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.551 08:00:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.551 08:00:40 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:14:34.551 00:14:34.551 real 0m2.337s 00:14:34.551 user 0m6.714s 00:14:34.551 sys 0m0.586s 00:14:34.551 08:00:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:34.551 ************************************ 00:14:34.551 END TEST nvmf_identify 00:14:34.551 
************************************ 00:14:34.551 08:00:40 -- common/autotest_common.sh@10 -- # set +x 00:14:34.551 08:00:40 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:34.551 08:00:40 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:14:34.551 08:00:40 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:14:34.551 08:00:40 -- common/autotest_common.sh@10 -- # set +x 00:14:34.551 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:14:34.551 ************************************ 00:14:34.551 START TEST nvmf_perf 00:14:34.551 ************************************ 00:14:34.551 08:00:40 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:14:34.809 * Looking for test storage... 00:14:34.809 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:14:34.809 08:00:40 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:14:34.809 08:00:40 -- nvmf/common.sh@7 -- # uname -s 00:14:34.809 08:00:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:34.809 08:00:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:34.809 08:00:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:34.809 08:00:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:34.809 08:00:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:34.809 08:00:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:34.809 08:00:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:34.809 08:00:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:34.809 08:00:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:34.809 08:00:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:34.809 08:00:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:14:34.809 08:00:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:14:34.809 08:00:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:34.809 08:00:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:34.809 08:00:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:14:34.809 08:00:40 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:34.809 08:00:40 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:34.809 08:00:40 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:34.809 08:00:40 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:34.809 08:00:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.809 08:00:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.809 08:00:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.809 08:00:40 -- paths/export.sh@5 -- # export PATH 00:14:34.809 08:00:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:34.809 08:00:40 -- nvmf/common.sh@46 -- # : 0 00:14:34.809 08:00:40 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:34.809 08:00:40 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:34.810 08:00:40 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:34.810 08:00:40 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:34.810 08:00:40 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:34.810 08:00:40 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:34.810 08:00:40 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:34.810 08:00:40 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:34.810 08:00:40 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:34.810 08:00:40 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:34.810 08:00:40 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:14:34.810 08:00:40 -- host/perf.sh@17 -- # nvmftestinit 00:14:34.810 08:00:40 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:14:34.810 08:00:40 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:34.810 08:00:40 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:34.810 08:00:40 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:34.810 08:00:40 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:34.810 08:00:40 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.810 08:00:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:34.810 08:00:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:34.810 08:00:40 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:14:34.810 08:00:40 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:14:34.810 08:00:40 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:14:34.810 08:00:40 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 
00:14:34.810 08:00:40 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:14:34.810 08:00:40 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:14:34.810 08:00:40 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:34.810 08:00:40 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:34.810 08:00:40 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:14:34.810 08:00:40 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:14:34.810 08:00:40 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:14:34.810 08:00:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:14:34.810 08:00:40 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:14:34.810 08:00:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:34.810 08:00:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:14:34.810 08:00:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:14:34.810 08:00:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:14:34.810 08:00:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:14:34.810 08:00:40 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:14:34.810 08:00:40 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:14:34.810 Cannot find device "nvmf_tgt_br" 00:14:34.810 08:00:40 -- nvmf/common.sh@154 -- # true 00:14:34.810 08:00:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:14:34.810 Cannot find device "nvmf_tgt_br2" 00:14:34.810 08:00:40 -- nvmf/common.sh@155 -- # true 00:14:34.810 08:00:40 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:14:34.810 08:00:40 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:14:34.810 Cannot find device "nvmf_tgt_br" 00:14:34.810 08:00:40 -- nvmf/common.sh@157 -- # true 00:14:34.810 08:00:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:14:34.810 Cannot find device "nvmf_tgt_br2" 00:14:34.810 08:00:40 -- nvmf/common.sh@158 -- # true 00:14:34.810 08:00:40 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:14:34.810 08:00:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:14:34.810 08:00:40 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:14:34.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:34.810 08:00:40 -- nvmf/common.sh@161 -- # true 00:14:34.810 08:00:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:14:34.810 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:14:34.810 08:00:40 -- nvmf/common.sh@162 -- # true 00:14:34.810 08:00:40 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:14:34.810 08:00:40 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:14:34.810 08:00:40 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:14:34.810 08:00:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:14:34.810 08:00:40 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:14:34.810 08:00:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:14:34.810 08:00:40 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:14:34.810 08:00:40 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:14:35.069 08:00:40 -- nvmf/common.sh@179 -- # ip netns exec 
nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:14:35.069 08:00:40 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:14:35.069 08:00:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:14:35.069 08:00:40 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:14:35.069 08:00:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:14:35.069 08:00:40 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:14:35.069 08:00:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:14:35.069 08:00:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:14:35.069 08:00:40 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:14:35.069 08:00:40 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:14:35.069 08:00:40 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:14:35.069 08:00:40 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:14:35.069 08:00:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:14:35.069 08:00:40 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:14:35.069 08:00:40 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:14:35.069 08:00:40 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:14:35.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:14:35.069 00:14:35.069 --- 10.0.0.2 ping statistics --- 00:14:35.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.069 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:14:35.069 08:00:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:14:35.069 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:14:35.069 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:14:35.069 00:14:35.069 --- 10.0.0.3 ping statistics --- 00:14:35.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.069 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:14:35.069 08:00:40 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:14:35.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:35.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:14:35.069 00:14:35.069 --- 10.0.0.1 ping statistics --- 00:14:35.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.069 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:14:35.069 08:00:40 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.069 08:00:40 -- nvmf/common.sh@421 -- # return 0 00:14:35.069 08:00:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:14:35.069 08:00:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.069 08:00:40 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:14:35.069 08:00:40 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:14:35.069 08:00:40 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.069 08:00:40 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:14:35.069 08:00:40 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:14:35.069 08:00:40 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:14:35.069 08:00:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:14:35.069 08:00:40 -- common/autotest_common.sh@712 -- # xtrace_disable 00:14:35.069 08:00:40 -- common/autotest_common.sh@10 -- # set +x 00:14:35.069 08:00:40 -- nvmf/common.sh@469 -- # nvmfpid=76974 00:14:35.069 08:00:40 -- nvmf/common.sh@470 -- # waitforlisten 76974 00:14:35.070 08:00:40 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:35.070 08:00:40 -- common/autotest_common.sh@819 -- # '[' -z 76974 ']' 00:14:35.070 08:00:40 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.070 08:00:40 -- common/autotest_common.sh@824 -- # local max_retries=100 00:14:35.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.070 08:00:40 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.070 08:00:40 -- common/autotest_common.sh@828 -- # xtrace_disable 00:14:35.070 08:00:40 -- common/autotest_common.sh@10 -- # set +x 00:14:35.070 [2024-07-13 08:00:40.834662] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:14:35.070 [2024-07-13 08:00:40.834808] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.329 [2024-07-13 08:00:40.975688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:35.329 [2024-07-13 08:00:41.018655] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:14:35.329 [2024-07-13 08:00:41.018860] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.329 [2024-07-13 08:00:41.018877] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.329 [2024-07-13 08:00:41.018888] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
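With the namespace reachable, perf.sh launches nvmf_tgt inside it (traced just above) and configures it over JSON-RPC (traced below): it creates a 64 MiB Malloc bdev, attaches the local PCIe NVMe drive via gen_nvme.sh, creates the TCP transport and subsystem nqn.2016-06.io.spdk:cnode1, adds both bdevs as namespaces, and exposes a listener on 10.0.0.2:4420. Reduced to its essentials, with paths, NQN and serial copied from the trace (the waitforlisten polling of /var/tmp/spdk.sock and the Nvme0n1 namespace are omitted for brevity):

  spdk=/home/vagrant/spdk_repo/spdk
  ip netns exec nvmf_tgt_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  "$spdk/scripts/rpc.py" bdev_malloc_create 64 512       # 64 MiB bdev, 512-byte blocks -> Malloc0
  "$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o
  "$spdk/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  "$spdk/scripts/rpc.py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  "$spdk/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The spdk_nvme_perf runs that follow then point either at the local controller (-r 'trtype:PCIe traddr:0000:00:06.0') or at this listener (-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420') while varying queue depth (-q) and I/O size (-o).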
00:14:35.329 [2024-07-13 08:00:41.018999] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:35.329 [2024-07-13 08:00:41.019236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:35.329 [2024-07-13 08:00:41.019242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.329 [2024-07-13 08:00:41.019092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:36.262 08:00:41 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:14:36.262 08:00:41 -- common/autotest_common.sh@852 -- # return 0 00:14:36.262 08:00:41 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:14:36.262 08:00:41 -- common/autotest_common.sh@718 -- # xtrace_disable 00:14:36.262 08:00:41 -- common/autotest_common.sh@10 -- # set +x 00:14:36.262 08:00:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.262 08:00:41 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:14:36.262 08:00:41 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:14:36.521 08:00:42 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:14:36.521 08:00:42 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:14:36.782 08:00:42 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:06.0 00:14:36.782 08:00:42 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:37.040 08:00:42 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:14:37.040 08:00:42 -- host/perf.sh@33 -- # '[' -n 0000:00:06.0 ']' 00:14:37.040 08:00:42 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:14:37.040 08:00:42 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:14:37.040 08:00:42 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:14:37.040 [2024-07-13 08:00:42.831915] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:37.298 08:00:42 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:37.298 08:00:43 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:37.298 08:00:43 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:37.556 08:00:43 -- host/perf.sh@45 -- # for bdev in $bdevs 00:14:37.556 08:00:43 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:14:37.816 08:00:43 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:38.074 [2024-07-13 08:00:43.749128] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:38.074 08:00:43 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:38.333 08:00:44 -- host/perf.sh@52 -- # '[' -n 0000:00:06.0 ']' 00:14:38.333 08:00:44 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:38.333 08:00:44 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:14:38.333 08:00:44 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:06.0' 00:14:39.708 Initializing NVMe 
Controllers 00:14:39.708 Attached to NVMe Controller at 0000:00:06.0 [1b36:0010] 00:14:39.708 Associating PCIE (0000:00:06.0) NSID 1 with lcore 0 00:14:39.708 Initialization complete. Launching workers. 00:14:39.708 ======================================================== 00:14:39.708 Latency(us) 00:14:39.708 Device Information : IOPS MiB/s Average min max 00:14:39.708 PCIE (0000:00:06.0) NSID 1 from core 0: 23392.00 91.38 1367.59 355.93 8267.77 00:14:39.708 ======================================================== 00:14:39.708 Total : 23392.00 91.38 1367.59 355.93 8267.77 00:14:39.708 00:14:39.708 08:00:45 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:41.082 Initializing NVMe Controllers 00:14:41.082 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:41.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:41.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:41.082 Initialization complete. Launching workers. 00:14:41.082 ======================================================== 00:14:41.082 Latency(us) 00:14:41.082 Device Information : IOPS MiB/s Average min max 00:14:41.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3674.74 14.35 271.81 102.31 7177.45 00:14:41.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.76 0.48 8144.12 6072.34 12049.02 00:14:41.082 ======================================================== 00:14:41.082 Total : 3798.50 14.84 528.29 102.31 12049.02 00:14:41.082 00:14:41.082 08:00:46 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:42.016 Initializing NVMe Controllers 00:14:42.016 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:42.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:42.016 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:42.016 Initialization complete. Launching workers. 00:14:42.016 ======================================================== 00:14:42.016 Latency(us) 00:14:42.016 Device Information : IOPS MiB/s Average min max 00:14:42.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8871.93 34.66 3606.53 419.04 10846.14 00:14:42.016 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3935.97 15.37 8169.72 5304.38 16249.33 00:14:42.016 ======================================================== 00:14:42.016 Total : 12807.90 50.03 5008.83 419.04 16249.33 00:14:42.016 00:14:42.273 08:00:47 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:14:42.273 08:00:47 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:44.880 Initializing NVMe Controllers 00:14:44.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:44.880 Controller IO queue size 128, less than required. 00:14:44.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:44.880 Controller IO queue size 128, less than required. 
00:14:44.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:44.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:44.880 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:44.880 Initialization complete. Launching workers. 00:14:44.880 ======================================================== 00:14:44.880 Latency(us) 00:14:44.880 Device Information : IOPS MiB/s Average min max 00:14:44.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1713.32 428.33 75527.47 48330.34 181991.18 00:14:44.880 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 608.44 152.11 215862.00 83783.64 337196.15 00:14:44.880 ======================================================== 00:14:44.880 Total : 2321.76 580.44 112303.34 48330.34 337196.15 00:14:44.880 00:14:44.880 08:00:50 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:14:44.880 No valid NVMe controllers or AIO or URING devices found 00:14:44.880 Initializing NVMe Controllers 00:14:44.880 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:44.880 Controller IO queue size 128, less than required. 00:14:44.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:44.880 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:14:44.880 Controller IO queue size 128, less than required. 00:14:44.880 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:44.880 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:14:44.880 WARNING: Some requested NVMe devices were skipped 00:14:44.880 08:00:50 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:14:47.412 Initializing NVMe Controllers 00:14:47.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:47.412 Controller IO queue size 128, less than required. 00:14:47.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:47.412 Controller IO queue size 128, less than required. 00:14:47.412 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:47.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:14:47.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:14:47.412 Initialization complete. Launching workers. 
00:14:47.412 00:14:47.412 ==================== 00:14:47.412 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:14:47.412 TCP transport: 00:14:47.412 polls: 7413 00:14:47.412 idle_polls: 0 00:14:47.412 sock_completions: 7413 00:14:47.412 nvme_completions: 7051 00:14:47.412 submitted_requests: 10648 00:14:47.412 queued_requests: 1 00:14:47.412 00:14:47.412 ==================== 00:14:47.412 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:14:47.412 TCP transport: 00:14:47.412 polls: 8049 00:14:47.412 idle_polls: 0 00:14:47.412 sock_completions: 8049 00:14:47.412 nvme_completions: 7019 00:14:47.412 submitted_requests: 10720 00:14:47.412 queued_requests: 1 00:14:47.412 ======================================================== 00:14:47.412 Latency(us) 00:14:47.412 Device Information : IOPS MiB/s Average min max 00:14:47.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1824.34 456.09 71671.80 35071.01 117651.16 00:14:47.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1815.85 453.96 71002.57 35530.66 129213.82 00:14:47.412 ======================================================== 00:14:47.412 Total : 3640.19 910.05 71337.96 35071.01 129213.82 00:14:47.412 00:14:47.412 08:00:52 -- host/perf.sh@66 -- # sync 00:14:47.412 08:00:53 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.670 08:00:53 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:14:47.670 08:00:53 -- host/perf.sh@71 -- # '[' -n 0000:00:06.0 ']' 00:14:47.670 08:00:53 -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:14:47.929 08:00:53 -- host/perf.sh@72 -- # ls_guid=c0dfc667-cab2-494e-b77a-fa3dcc83e3d5 00:14:47.929 08:00:53 -- host/perf.sh@73 -- # get_lvs_free_mb c0dfc667-cab2-494e-b77a-fa3dcc83e3d5 00:14:47.929 08:00:53 -- common/autotest_common.sh@1343 -- # local lvs_uuid=c0dfc667-cab2-494e-b77a-fa3dcc83e3d5 00:14:47.929 08:00:53 -- common/autotest_common.sh@1344 -- # local lvs_info 00:14:47.929 08:00:53 -- common/autotest_common.sh@1345 -- # local fc 00:14:47.929 08:00:53 -- common/autotest_common.sh@1346 -- # local cs 00:14:47.929 08:00:53 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:48.187 08:00:53 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:14:48.187 { 00:14:48.187 "uuid": "c0dfc667-cab2-494e-b77a-fa3dcc83e3d5", 00:14:48.187 "name": "lvs_0", 00:14:48.187 "base_bdev": "Nvme0n1", 00:14:48.187 "total_data_clusters": 1278, 00:14:48.187 "free_clusters": 1278, 00:14:48.187 "block_size": 4096, 00:14:48.187 "cluster_size": 4194304 00:14:48.187 } 00:14:48.187 ]' 00:14:48.187 08:00:53 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="c0dfc667-cab2-494e-b77a-fa3dcc83e3d5") .free_clusters' 00:14:48.187 08:00:53 -- common/autotest_common.sh@1348 -- # fc=1278 00:14:48.187 08:00:53 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="c0dfc667-cab2-494e-b77a-fa3dcc83e3d5") .cluster_size' 00:14:48.187 08:00:53 -- common/autotest_common.sh@1349 -- # cs=4194304 00:14:48.187 08:00:53 -- common/autotest_common.sh@1352 -- # free_mb=5112 00:14:48.187 5112 00:14:48.187 08:00:53 -- common/autotest_common.sh@1353 -- # echo 5112 00:14:48.187 08:00:53 -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:14:48.187 08:00:53 -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create 
-u c0dfc667-cab2-494e-b77a-fa3dcc83e3d5 lbd_0 5112 00:14:48.444 08:00:54 -- host/perf.sh@80 -- # lb_guid=60d21828-258a-4b13-a147-8b4d895bcdfb 00:14:48.444 08:00:54 -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 60d21828-258a-4b13-a147-8b4d895bcdfb lvs_n_0 00:14:49.010 08:00:54 -- host/perf.sh@83 -- # ls_nested_guid=1a656f10-6e1f-4ea4-b7f5-b6ef405e8183 00:14:49.010 08:00:54 -- host/perf.sh@84 -- # get_lvs_free_mb 1a656f10-6e1f-4ea4-b7f5-b6ef405e8183 00:14:49.010 08:00:54 -- common/autotest_common.sh@1343 -- # local lvs_uuid=1a656f10-6e1f-4ea4-b7f5-b6ef405e8183 00:14:49.010 08:00:54 -- common/autotest_common.sh@1344 -- # local lvs_info 00:14:49.010 08:00:54 -- common/autotest_common.sh@1345 -- # local fc 00:14:49.010 08:00:54 -- common/autotest_common.sh@1346 -- # local cs 00:14:49.010 08:00:54 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:14:49.010 08:00:54 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:14:49.010 { 00:14:49.010 "uuid": "c0dfc667-cab2-494e-b77a-fa3dcc83e3d5", 00:14:49.010 "name": "lvs_0", 00:14:49.010 "base_bdev": "Nvme0n1", 00:14:49.010 "total_data_clusters": 1278, 00:14:49.010 "free_clusters": 0, 00:14:49.010 "block_size": 4096, 00:14:49.010 "cluster_size": 4194304 00:14:49.010 }, 00:14:49.010 { 00:14:49.010 "uuid": "1a656f10-6e1f-4ea4-b7f5-b6ef405e8183", 00:14:49.010 "name": "lvs_n_0", 00:14:49.010 "base_bdev": "60d21828-258a-4b13-a147-8b4d895bcdfb", 00:14:49.010 "total_data_clusters": 1276, 00:14:49.010 "free_clusters": 1276, 00:14:49.010 "block_size": 4096, 00:14:49.010 "cluster_size": 4194304 00:14:49.010 } 00:14:49.010 ]' 00:14:49.010 08:00:54 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="1a656f10-6e1f-4ea4-b7f5-b6ef405e8183") .free_clusters' 00:14:49.010 08:00:54 -- common/autotest_common.sh@1348 -- # fc=1276 00:14:49.010 08:00:54 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="1a656f10-6e1f-4ea4-b7f5-b6ef405e8183") .cluster_size' 00:14:49.267 5104 00:14:49.267 08:00:54 -- common/autotest_common.sh@1349 -- # cs=4194304 00:14:49.267 08:00:54 -- common/autotest_common.sh@1352 -- # free_mb=5104 00:14:49.267 08:00:54 -- common/autotest_common.sh@1353 -- # echo 5104 00:14:49.267 08:00:54 -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:14:49.267 08:00:54 -- host/perf.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 1a656f10-6e1f-4ea4-b7f5-b6ef405e8183 lbd_nest_0 5104 00:14:49.525 08:00:55 -- host/perf.sh@88 -- # lb_nested_guid=b181977f-6153-466b-ae68-c7bc251726ac 00:14:49.525 08:00:55 -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:49.525 08:00:55 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:14:49.525 08:00:55 -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 b181977f-6153-466b-ae68-c7bc251726ac 00:14:49.783 08:00:55 -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:50.041 08:00:55 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:14:50.041 08:00:55 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:14:50.041 08:00:55 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:14:50.041 08:00:55 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:50.041 08:00:55 -- host/perf.sh@99 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:14:50.298 No valid NVMe controllers or AIO or URING devices found 00:14:50.298 Initializing NVMe Controllers 00:14:50.298 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:50.298 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:14:50.298 WARNING: Some requested NVMe devices were skipped 00:14:50.556 08:00:56 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:14:50.556 08:00:56 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:00.532 Initializing NVMe Controllers 00:15:00.532 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:00.532 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:00.532 Initialization complete. Launching workers. 00:15:00.532 ======================================================== 00:15:00.532 Latency(us) 00:15:00.532 Device Information : IOPS MiB/s Average min max 00:15:00.532 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 955.19 119.40 1046.51 330.57 8368.57 00:15:00.532 ======================================================== 00:15:00.532 Total : 955.19 119.40 1046.51 330.57 8368.57 00:15:00.532 00:15:00.532 08:01:06 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:00.532 08:01:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:00.532 08:01:06 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:01.100 No valid NVMe controllers or AIO or URING devices found 00:15:01.100 Initializing NVMe Controllers 00:15:01.100 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:01.100 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:01.100 WARNING: Some requested NVMe devices were skipped 00:15:01.100 08:01:06 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:01.100 08:01:06 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:13.304 Initializing NVMe Controllers 00:15:13.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.304 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:13.304 Initialization complete. Launching workers. 
00:15:13.304 ======================================================== 00:15:13.304 Latency(us) 00:15:13.304 Device Information : IOPS MiB/s Average min max 00:15:13.304 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1365.18 170.65 23466.26 6309.90 59918.07 00:15:13.304 ======================================================== 00:15:13.304 Total : 1365.18 170.65 23466.26 6309.90 59918.07 00:15:13.304 00:15:13.304 08:01:17 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:15:13.304 08:01:17 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:13.304 08:01:17 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:13.304 No valid NVMe controllers or AIO or URING devices found 00:15:13.304 Initializing NVMe Controllers 00:15:13.304 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:13.304 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:15:13.304 WARNING: Some requested NVMe devices were skipped 00:15:13.304 08:01:17 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:15:13.304 08:01:17 -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:15:23.327 Initializing NVMe Controllers 00:15:23.327 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:23.327 Controller IO queue size 128, less than required. 00:15:23.327 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:23.327 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:15:23.328 Initialization complete. Launching workers. 
00:15:23.328 ======================================================== 00:15:23.328 Latency(us) 00:15:23.328 Device Information : IOPS MiB/s Average min max 00:15:23.328 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4051.10 506.39 31658.59 12575.53 73175.32 00:15:23.328 ======================================================== 00:15:23.328 Total : 4051.10 506.39 31658.59 12575.53 73175.32 00:15:23.328 00:15:23.328 08:01:27 -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:23.328 08:01:27 -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete b181977f-6153-466b-ae68-c7bc251726ac 00:15:23.328 08:01:28 -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:23.328 08:01:28 -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 60d21828-258a-4b13-a147-8b4d895bcdfb 00:15:23.328 08:01:28 -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:23.328 08:01:29 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:23.328 08:01:29 -- host/perf.sh@114 -- # nvmftestfini 00:15:23.328 08:01:29 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:23.328 08:01:29 -- nvmf/common.sh@116 -- # sync 00:15:23.328 08:01:29 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:23.328 08:01:29 -- nvmf/common.sh@119 -- # set +e 00:15:23.328 08:01:29 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:23.328 08:01:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:23.328 rmmod nvme_tcp 00:15:23.328 rmmod nvme_fabrics 00:15:23.328 rmmod nvme_keyring 00:15:23.328 08:01:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:23.328 08:01:29 -- nvmf/common.sh@123 -- # set -e 00:15:23.328 08:01:29 -- nvmf/common.sh@124 -- # return 0 00:15:23.328 08:01:29 -- nvmf/common.sh@477 -- # '[' -n 76974 ']' 00:15:23.328 08:01:29 -- nvmf/common.sh@478 -- # killprocess 76974 00:15:23.328 08:01:29 -- common/autotest_common.sh@926 -- # '[' -z 76974 ']' 00:15:23.328 08:01:29 -- common/autotest_common.sh@930 -- # kill -0 76974 00:15:23.328 08:01:29 -- common/autotest_common.sh@931 -- # uname 00:15:23.328 08:01:29 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:23.586 08:01:29 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 76974 00:15:23.586 08:01:29 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:23.586 killing process with pid 76974 00:15:23.586 08:01:29 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:23.586 08:01:29 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 76974' 00:15:23.586 08:01:29 -- common/autotest_common.sh@945 -- # kill 76974 00:15:23.586 08:01:29 -- common/autotest_common.sh@950 -- # wait 76974 00:15:24.961 08:01:30 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:24.961 08:01:30 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:24.961 08:01:30 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:24.961 08:01:30 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:24.961 08:01:30 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:24.961 08:01:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.961 08:01:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.961 08:01:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.961 08:01:30 -- nvmf/common.sh@278 -- # ip 
-4 addr flush nvmf_init_if 00:15:24.961 00:15:24.961 real 0m50.086s 00:15:24.961 user 3m7.475s 00:15:24.961 sys 0m13.085s 00:15:24.961 08:01:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:24.961 08:01:30 -- common/autotest_common.sh@10 -- # set +x 00:15:24.961 ************************************ 00:15:24.961 END TEST nvmf_perf 00:15:24.961 ************************************ 00:15:24.961 08:01:30 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:24.961 08:01:30 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:24.961 08:01:30 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:24.961 08:01:30 -- common/autotest_common.sh@10 -- # set +x 00:15:24.961 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:15:24.961 ************************************ 00:15:24.961 START TEST nvmf_fio_host 00:15:24.961 ************************************ 00:15:24.961 08:01:30 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:15:24.961 * Looking for test storage... 00:15:24.961 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:24.961 08:01:30 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:24.961 08:01:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.961 08:01:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.962 08:01:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.962 08:01:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.962 08:01:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.962 08:01:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.962 08:01:30 -- paths/export.sh@5 -- # export PATH 00:15:24.962 08:01:30 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.962 08:01:30 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:24.962 08:01:30 -- nvmf/common.sh@7 -- # uname -s 00:15:24.962 08:01:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:24.962 08:01:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:24.962 08:01:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:24.962 08:01:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:24.962 08:01:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:24.962 08:01:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:24.962 08:01:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:24.962 08:01:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:24.962 08:01:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:24.962 08:01:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:24.962 08:01:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:15:24.962 08:01:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:15:24.962 08:01:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:24.962 08:01:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:24.962 08:01:30 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:24.962 08:01:30 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:24.962 08:01:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:24.962 08:01:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:24.962 08:01:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:24.962 08:01:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.962 08:01:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.962 08:01:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.962 08:01:30 -- paths/export.sh@5 -- # export PATH 00:15:24.962 08:01:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:24.962 08:01:30 -- nvmf/common.sh@46 -- # : 0 00:15:24.962 08:01:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:24.962 08:01:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:24.962 08:01:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:24.962 08:01:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:24.962 08:01:30 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:24.962 08:01:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:24.962 08:01:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:24.962 08:01:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:24.962 08:01:30 -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:24.962 08:01:30 -- host/fio.sh@14 -- # nvmftestinit 00:15:24.962 08:01:30 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:24.962 08:01:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:24.962 08:01:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:24.962 08:01:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:24.962 08:01:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:24.962 08:01:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:24.962 08:01:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:24.962 08:01:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:24.962 08:01:30 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:24.962 08:01:30 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:24.962 08:01:30 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:24.962 08:01:30 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:24.962 08:01:30 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:24.962 08:01:30 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:24.962 08:01:30 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:24.962 08:01:30 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:24.962 08:01:30 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:24.962 08:01:30 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:24.962 08:01:30 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:24.962 08:01:30 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:24.962 08:01:30 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:24.962 08:01:30 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:24.962 08:01:30 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:24.962 08:01:30 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:24.962 08:01:30 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:15:24.962 08:01:30 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:24.962 08:01:30 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:24.962 08:01:30 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:24.962 Cannot find device "nvmf_tgt_br" 00:15:24.962 08:01:30 -- nvmf/common.sh@154 -- # true 00:15:24.962 08:01:30 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:24.962 Cannot find device "nvmf_tgt_br2" 00:15:24.962 08:01:30 -- nvmf/common.sh@155 -- # true 00:15:24.962 08:01:30 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:24.962 08:01:30 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:24.962 Cannot find device "nvmf_tgt_br" 00:15:24.962 08:01:30 -- nvmf/common.sh@157 -- # true 00:15:24.962 08:01:30 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:24.962 Cannot find device "nvmf_tgt_br2" 00:15:24.963 08:01:30 -- nvmf/common.sh@158 -- # true 00:15:24.963 08:01:30 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:24.963 08:01:30 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:24.963 08:01:30 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:24.963 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:24.963 08:01:30 -- nvmf/common.sh@161 -- # true 00:15:24.963 08:01:30 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:24.963 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:24.963 08:01:30 -- nvmf/common.sh@162 -- # true 00:15:24.963 08:01:30 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:24.963 08:01:30 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:24.963 08:01:30 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:24.963 08:01:30 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:24.963 08:01:30 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:24.963 08:01:30 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:24.963 08:01:30 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:24.963 08:01:30 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:24.963 08:01:30 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:24.963 08:01:30 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:24.963 08:01:30 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:24.963 08:01:30 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:24.963 08:01:30 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:24.963 08:01:30 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:25.222 08:01:30 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:15:25.222 08:01:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:25.222 08:01:30 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:25.222 08:01:30 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:25.222 08:01:30 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:25.222 08:01:30 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:25.222 08:01:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:15:25.222 08:01:30 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:25.222 08:01:30 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:25.222 08:01:30 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:25.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:15:25.222 00:15:25.222 --- 10.0.0.2 ping statistics --- 00:15:25.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.222 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:15:25.222 08:01:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:25.222 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:25.222 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:15:25.222 00:15:25.222 --- 10.0.0.3 ping statistics --- 00:15:25.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.222 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:15:25.222 08:01:30 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:25.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:15:25.222 00:15:25.222 --- 10.0.0.1 ping statistics --- 00:15:25.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.222 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:15:25.222 08:01:30 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.222 08:01:30 -- nvmf/common.sh@421 -- # return 0 00:15:25.222 08:01:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:25.222 08:01:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.222 08:01:30 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:25.222 08:01:30 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:25.222 08:01:30 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.222 08:01:30 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:25.222 08:01:30 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:25.222 08:01:30 -- host/fio.sh@16 -- # [[ y != y ]] 00:15:25.222 08:01:30 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:15:25.222 08:01:30 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:25.222 08:01:30 -- common/autotest_common.sh@10 -- # set +x 00:15:25.222 08:01:30 -- host/fio.sh@24 -- # nvmfpid=77500 00:15:25.222 08:01:30 -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:25.222 08:01:30 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:25.222 08:01:30 -- host/fio.sh@28 -- # waitforlisten 77500 00:15:25.222 08:01:30 -- common/autotest_common.sh@819 -- # '[' -z 77500 ']' 00:15:25.222 08:01:30 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.222 08:01:30 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:25.222 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.222 08:01:30 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.222 08:01:30 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:25.222 08:01:30 -- common/autotest_common.sh@10 -- # set +x 00:15:25.222 [2024-07-13 08:01:30.966157] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:15:25.222 [2024-07-13 08:01:30.966252] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.481 [2024-07-13 08:01:31.104484] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:25.481 [2024-07-13 08:01:31.140919] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:25.481 [2024-07-13 08:01:31.141234] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.481 [2024-07-13 08:01:31.141289] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.481 [2024-07-13 08:01:31.141541] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.481 [2024-07-13 08:01:31.141746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.481 [2024-07-13 08:01:31.141929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.481 [2024-07-13 08:01:31.142121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:25.481 [2024-07-13 08:01:31.142130] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.414 08:01:31 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:26.414 08:01:31 -- common/autotest_common.sh@852 -- # return 0 00:15:26.414 08:01:31 -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:26.414 [2024-07-13 08:01:32.154556] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.414 08:01:32 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:15:26.414 08:01:32 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:26.414 08:01:32 -- common/autotest_common.sh@10 -- # set +x 00:15:26.414 08:01:32 -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:26.673 Malloc1 00:15:26.932 08:01:32 -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:27.189 08:01:32 -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:27.447 08:01:33 -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:27.447 [2024-07-13 08:01:33.249928] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.706 08:01:33 -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:27.966 08:01:33 -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:15:27.966 08:01:33 -- host/fio.sh@41 -- # fio_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:27.966 08:01:33 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:27.966 08:01:33 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:27.966 08:01:33 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:27.966 08:01:33 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:27.966 08:01:33 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:27.966 08:01:33 -- common/autotest_common.sh@1320 -- # shift 00:15:27.966 08:01:33 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:27.966 08:01:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:27.966 08:01:33 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:27.966 08:01:33 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:27.966 08:01:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:27.966 08:01:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:15:27.966 08:01:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:15:27.966 08:01:33 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:27.966 08:01:33 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:27.966 08:01:33 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:27.966 08:01:33 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:15:27.966 08:01:33 -- common/autotest_common.sh@1324 -- # asan_lib= 00:15:27.966 08:01:33 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:15:27.966 08:01:33 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:27.966 08:01:33 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:27.966 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:27.966 fio-3.35 00:15:27.966 Starting 1 thread 00:15:30.496 00:15:30.496 test: (groupid=0, jobs=1): err= 0: pid=77560: Sat Jul 13 08:01:35 2024 00:15:30.496 read: IOPS=9421, BW=36.8MiB/s (38.6MB/s)(73.8MiB/2006msec) 00:15:30.496 slat (nsec): min=1935, max=318521, avg=2639.81, stdev=3128.83 00:15:30.496 clat (usec): min=2549, max=11744, avg=7054.43, stdev=500.69 00:15:30.496 lat (usec): min=2602, max=11746, avg=7057.07, stdev=500.47 00:15:30.496 clat percentiles (usec): 00:15:30.496 | 1.00th=[ 5932], 5.00th=[ 6325], 10.00th=[ 6456], 20.00th=[ 6718], 00:15:30.496 | 30.00th=[ 6849], 40.00th=[ 6915], 50.00th=[ 7046], 60.00th=[ 7177], 00:15:30.496 | 70.00th=[ 7308], 80.00th=[ 7439], 90.00th=[ 7635], 95.00th=[ 7832], 00:15:30.496 | 99.00th=[ 8291], 99.50th=[ 8586], 99.90th=[10028], 99.95th=[11076], 00:15:30.496 | 99.99th=[11731] 00:15:30.496 bw ( KiB/s): min=36680, max=38240, per=99.98%, avg=37678.00, stdev=719.92, samples=4 00:15:30.496 iops : min= 9170, max= 9560, avg=9419.50, stdev=179.98, samples=4 00:15:30.496 write: IOPS=9425, BW=36.8MiB/s (38.6MB/s)(73.9MiB/2006msec); 0 zone resets 00:15:30.496 slat (usec): 
min=2, max=2168, avg= 2.86, stdev=15.91 00:15:30.496 clat (usec): min=2390, max=11603, avg=6460.71, stdev=470.02 00:15:30.496 lat (usec): min=2403, max=11605, avg=6463.56, stdev=470.19 00:15:30.496 clat percentiles (usec): 00:15:30.496 | 1.00th=[ 5407], 5.00th=[ 5800], 10.00th=[ 5932], 20.00th=[ 6128], 00:15:30.496 | 30.00th=[ 6259], 40.00th=[ 6390], 50.00th=[ 6456], 60.00th=[ 6521], 00:15:30.496 | 70.00th=[ 6652], 80.00th=[ 6783], 90.00th=[ 6980], 95.00th=[ 7111], 00:15:30.496 | 99.00th=[ 7701], 99.50th=[ 8356], 99.90th=[ 9765], 99.95th=[10421], 00:15:30.496 | 99.99th=[11469] 00:15:30.496 bw ( KiB/s): min=37312, max=38016, per=99.94%, avg=37682.00, stdev=305.43, samples=4 00:15:30.496 iops : min= 9328, max= 9504, avg=9420.50, stdev=76.36, samples=4 00:15:30.496 lat (msec) : 4=0.08%, 10=99.83%, 20=0.10% 00:15:30.496 cpu : usr=66.93%, sys=24.09%, ctx=464, majf=0, minf=5 00:15:30.496 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:30.496 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:30.496 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:30.496 issued rwts: total=18899,18908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:30.496 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:30.496 00:15:30.496 Run status group 0 (all jobs): 00:15:30.496 READ: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=73.8MiB (77.4MB), run=2006-2006msec 00:15:30.496 WRITE: bw=36.8MiB/s (38.6MB/s), 36.8MiB/s-36.8MiB/s (38.6MB/s-38.6MB/s), io=73.9MiB (77.4MB), run=2006-2006msec 00:15:30.496 08:01:36 -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:30.496 08:01:36 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:30.496 08:01:36 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:30.496 08:01:36 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:30.496 08:01:36 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:30.496 08:01:36 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:30.496 08:01:36 -- common/autotest_common.sh@1320 -- # shift 00:15:30.496 08:01:36 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:30.496 08:01:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:30.496 08:01:36 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:30.496 08:01:36 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:30.496 08:01:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:30.496 08:01:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:15:30.496 08:01:36 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:15:30.496 08:01:36 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:30.496 08:01:36 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:30.496 08:01:36 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:30.496 08:01:36 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:15:30.496 08:01:36 -- common/autotest_common.sh@1324 -- # asan_lib= 00:15:30.496 08:01:36 -- 
common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:15:30.496 08:01:36 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:30.496 08:01:36 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:15:30.496 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:15:30.496 fio-3.35 00:15:30.496 Starting 1 thread 00:15:33.027 00:15:33.027 test: (groupid=0, jobs=1): err= 0: pid=77592: Sat Jul 13 08:01:38 2024 00:15:33.027 read: IOPS=8514, BW=133MiB/s (140MB/s)(267MiB/2004msec) 00:15:33.027 slat (usec): min=2, max=116, avg= 4.01, stdev= 2.24 00:15:33.027 clat (usec): min=2166, max=17546, avg=8172.30, stdev=2484.35 00:15:33.027 lat (usec): min=2180, max=17549, avg=8176.31, stdev=2484.49 00:15:33.027 clat percentiles (usec): 00:15:33.027 | 1.00th=[ 4080], 5.00th=[ 4817], 10.00th=[ 5276], 20.00th=[ 5997], 00:15:33.027 | 30.00th=[ 6587], 40.00th=[ 7242], 50.00th=[ 7767], 60.00th=[ 8455], 00:15:33.027 | 70.00th=[ 9241], 80.00th=[10290], 90.00th=[11600], 95.00th=[12780], 00:15:33.027 | 99.00th=[15270], 99.50th=[16188], 99.90th=[17171], 99.95th=[17433], 00:15:33.027 | 99.99th=[17433] 00:15:33.027 bw ( KiB/s): min=65056, max=76160, per=50.86%, avg=69288.00, stdev=4823.51, samples=4 00:15:33.027 iops : min= 4066, max= 4760, avg=4330.50, stdev=301.47, samples=4 00:15:33.027 write: IOPS=4863, BW=76.0MiB/s (79.7MB/s)(141MiB/1861msec); 0 zone resets 00:15:33.027 slat (usec): min=31, max=348, avg=40.42, stdev= 8.22 00:15:33.027 clat (usec): min=5945, max=18502, avg=12160.32, stdev=1880.17 00:15:33.027 lat (usec): min=5979, max=18540, avg=12200.74, stdev=1881.39 00:15:33.027 clat percentiles (usec): 00:15:33.027 | 1.00th=[ 8455], 5.00th=[ 9241], 10.00th=[ 9765], 20.00th=[10421], 00:15:33.027 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12125], 60.00th=[12518], 00:15:33.027 | 70.00th=[13042], 80.00th=[13566], 90.00th=[14746], 95.00th=[15664], 00:15:33.027 | 99.00th=[16909], 99.50th=[17433], 99.90th=[17957], 99.95th=[18220], 00:15:33.027 | 99.99th=[18482] 00:15:33.027 bw ( KiB/s): min=68192, max=78272, per=92.24%, avg=71776.00, stdev=4453.79, samples=4 00:15:33.027 iops : min= 4262, max= 4892, avg=4486.00, stdev=278.36, samples=4 00:15:33.027 lat (msec) : 4=0.49%, 10=54.64%, 20=44.87% 00:15:33.027 cpu : usr=78.18%, sys=15.63%, ctx=21, majf=0, minf=1 00:15:33.027 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:15:33.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:33.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:33.027 issued rwts: total=17064,9051,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:33.027 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:33.027 00:15:33.027 Run status group 0 (all jobs): 00:15:33.027 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=267MiB (280MB), run=2004-2004msec 00:15:33.027 WRITE: bw=76.0MiB/s (79.7MB/s), 76.0MiB/s-76.0MiB/s (79.7MB/s-79.7MB/s), io=141MiB (148MB), run=1861-1861msec 00:15:33.027 08:01:38 -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:33.027 08:01:38 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:15:33.027 08:01:38 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:15:33.027 08:01:38 -- host/fio.sh@51 -- # get_nvme_bdfs 
00:15:33.027 08:01:38 -- common/autotest_common.sh@1498 -- # bdfs=() 00:15:33.027 08:01:38 -- common/autotest_common.sh@1498 -- # local bdfs 00:15:33.027 08:01:38 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:15:33.027 08:01:38 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:15:33.027 08:01:38 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:15:33.027 08:01:38 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:15:33.027 08:01:38 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:15:33.027 08:01:38 -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:06.0 -i 10.0.0.2 00:15:33.593 Nvme0n1 00:15:33.593 08:01:39 -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:15:33.593 08:01:39 -- host/fio.sh@53 -- # ls_guid=26e8775b-ab96-4b1f-8c1e-28b143ab2dd7 00:15:33.593 08:01:39 -- host/fio.sh@54 -- # get_lvs_free_mb 26e8775b-ab96-4b1f-8c1e-28b143ab2dd7 00:15:33.593 08:01:39 -- common/autotest_common.sh@1343 -- # local lvs_uuid=26e8775b-ab96-4b1f-8c1e-28b143ab2dd7 00:15:33.593 08:01:39 -- common/autotest_common.sh@1344 -- # local lvs_info 00:15:33.593 08:01:39 -- common/autotest_common.sh@1345 -- # local fc 00:15:33.593 08:01:39 -- common/autotest_common.sh@1346 -- # local cs 00:15:33.593 08:01:39 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:33.850 08:01:39 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:15:33.850 { 00:15:33.850 "uuid": "26e8775b-ab96-4b1f-8c1e-28b143ab2dd7", 00:15:33.850 "name": "lvs_0", 00:15:33.850 "base_bdev": "Nvme0n1", 00:15:33.850 "total_data_clusters": 4, 00:15:33.850 "free_clusters": 4, 00:15:33.850 "block_size": 4096, 00:15:33.850 "cluster_size": 1073741824 00:15:33.850 } 00:15:33.850 ]' 00:15:33.850 08:01:39 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="26e8775b-ab96-4b1f-8c1e-28b143ab2dd7") .free_clusters' 00:15:33.850 08:01:39 -- common/autotest_common.sh@1348 -- # fc=4 00:15:33.850 08:01:39 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="26e8775b-ab96-4b1f-8c1e-28b143ab2dd7") .cluster_size' 00:15:34.107 4096 00:15:34.107 08:01:39 -- common/autotest_common.sh@1349 -- # cs=1073741824 00:15:34.107 08:01:39 -- common/autotest_common.sh@1352 -- # free_mb=4096 00:15:34.107 08:01:39 -- common/autotest_common.sh@1353 -- # echo 4096 00:15:34.107 08:01:39 -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:15:34.364 04db44ae-22ab-4b6d-ad4d-105ccfd6a7b0 00:15:34.364 08:01:39 -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:15:34.364 08:01:40 -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:15:34.931 08:01:40 -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:15:34.931 08:01:40 -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:34.931 08:01:40 -- common/autotest_common.sh@1339 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:34.931 08:01:40 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:34.931 08:01:40 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:34.931 08:01:40 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:34.931 08:01:40 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:34.931 08:01:40 -- common/autotest_common.sh@1320 -- # shift 00:15:34.931 08:01:40 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:34.931 08:01:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:34.931 08:01:40 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:34.931 08:01:40 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:34.931 08:01:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:35.189 08:01:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:15:35.189 08:01:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:15:35.189 08:01:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:35.189 08:01:40 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:35.189 08:01:40 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:15:35.189 08:01:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:35.189 08:01:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:15:35.189 08:01:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:15:35.189 08:01:40 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:35.189 08:01:40 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:35.189 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:35.189 fio-3.35 00:15:35.189 Starting 1 thread 00:15:37.720 00:15:37.720 test: (groupid=0, jobs=1): err= 0: pid=77676: Sat Jul 13 08:01:43 2024 00:15:37.720 read: IOPS=6303, BW=24.6MiB/s (25.8MB/s)(49.4MiB/2008msec) 00:15:37.720 slat (usec): min=2, max=311, avg= 2.67, stdev= 3.65 00:15:37.720 clat (usec): min=2883, max=17891, avg=10603.23, stdev=1256.87 00:15:37.720 lat (usec): min=2892, max=17894, avg=10605.90, stdev=1256.64 00:15:37.720 clat percentiles (usec): 00:15:37.720 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9634], 00:15:37.720 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:15:37.720 | 70.00th=[10945], 80.00th=[11469], 90.00th=[12387], 95.00th=[13042], 00:15:37.720 | 99.00th=[14222], 99.50th=[14746], 99.90th=[15795], 99.95th=[15926], 00:15:37.720 | 99.99th=[17695] 00:15:37.720 bw ( KiB/s): min=22331, max=26960, per=99.91%, avg=25192.75, stdev=2049.65, samples=4 00:15:37.720 iops : min= 5582, max= 6740, avg=6298.00, stdev=512.76, samples=4 00:15:37.720 write: IOPS=6298, BW=24.6MiB/s (25.8MB/s)(49.4MiB/2008msec); 0 zone resets 00:15:37.720 slat (usec): min=2, max=130, avg= 2.73, stdev= 1.60 00:15:37.720 clat (usec): min=2259, max=16062, avg=9620.07, stdev=1168.67 00:15:37.720 lat (usec): min=2272, max=16065, avg=9622.79, stdev=1168.57 00:15:37.720 clat percentiles 
(usec): 00:15:37.720 | 1.00th=[ 7570], 5.00th=[ 8160], 10.00th=[ 8455], 20.00th=[ 8717], 00:15:37.720 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9372], 60.00th=[ 9634], 00:15:37.720 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11207], 95.00th=[11863], 00:15:37.720 | 99.00th=[12911], 99.50th=[13435], 99.90th=[14746], 99.95th=[15664], 00:15:37.720 | 99.99th=[16057] 00:15:37.720 bw ( KiB/s): min=23145, max=26688, per=99.79%, avg=25142.25, stdev=1509.95, samples=4 00:15:37.720 iops : min= 5786, max= 6672, avg=6285.50, stdev=377.60, samples=4 00:15:37.720 lat (msec) : 4=0.09%, 10=52.15%, 20=47.76% 00:15:37.720 cpu : usr=73.14%, sys=21.08%, ctx=6, majf=0, minf=5 00:15:37.720 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:15:37.720 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:37.720 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:37.720 issued rwts: total=12658,12648,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:37.720 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:37.720 00:15:37.720 Run status group 0 (all jobs): 00:15:37.720 READ: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=49.4MiB (51.8MB), run=2008-2008msec 00:15:37.720 WRITE: bw=24.6MiB/s (25.8MB/s), 24.6MiB/s-24.6MiB/s (25.8MB/s-25.8MB/s), io=49.4MiB (51.8MB), run=2008-2008msec 00:15:37.720 08:01:43 -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:15:37.720 08:01:43 -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:15:37.978 08:01:43 -- host/fio.sh@64 -- # ls_nested_guid=efd4bef5-0eee-4594-8a49-7fb63b8c3a14 00:15:37.978 08:01:43 -- host/fio.sh@65 -- # get_lvs_free_mb efd4bef5-0eee-4594-8a49-7fb63b8c3a14 00:15:37.978 08:01:43 -- common/autotest_common.sh@1343 -- # local lvs_uuid=efd4bef5-0eee-4594-8a49-7fb63b8c3a14 00:15:37.978 08:01:43 -- common/autotest_common.sh@1344 -- # local lvs_info 00:15:37.978 08:01:43 -- common/autotest_common.sh@1345 -- # local fc 00:15:37.978 08:01:43 -- common/autotest_common.sh@1346 -- # local cs 00:15:37.978 08:01:43 -- common/autotest_common.sh@1347 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:15:38.236 08:01:43 -- common/autotest_common.sh@1347 -- # lvs_info='[ 00:15:38.236 { 00:15:38.236 "uuid": "26e8775b-ab96-4b1f-8c1e-28b143ab2dd7", 00:15:38.236 "name": "lvs_0", 00:15:38.236 "base_bdev": "Nvme0n1", 00:15:38.236 "total_data_clusters": 4, 00:15:38.236 "free_clusters": 0, 00:15:38.236 "block_size": 4096, 00:15:38.236 "cluster_size": 1073741824 00:15:38.236 }, 00:15:38.236 { 00:15:38.237 "uuid": "efd4bef5-0eee-4594-8a49-7fb63b8c3a14", 00:15:38.237 "name": "lvs_n_0", 00:15:38.237 "base_bdev": "04db44ae-22ab-4b6d-ad4d-105ccfd6a7b0", 00:15:38.237 "total_data_clusters": 1022, 00:15:38.237 "free_clusters": 1022, 00:15:38.237 "block_size": 4096, 00:15:38.237 "cluster_size": 4194304 00:15:38.237 } 00:15:38.237 ]' 00:15:38.237 08:01:43 -- common/autotest_common.sh@1348 -- # jq '.[] | select(.uuid=="efd4bef5-0eee-4594-8a49-7fb63b8c3a14") .free_clusters' 00:15:38.237 08:01:44 -- common/autotest_common.sh@1348 -- # fc=1022 00:15:38.237 08:01:44 -- common/autotest_common.sh@1349 -- # jq '.[] | select(.uuid=="efd4bef5-0eee-4594-8a49-7fb63b8c3a14") .cluster_size' 00:15:38.494 4088 00:15:38.494 08:01:44 -- common/autotest_common.sh@1349 -- # cs=4194304 00:15:38.494 08:01:44 -- common/autotest_common.sh@1352 
-- # free_mb=4088 00:15:38.494 08:01:44 -- common/autotest_common.sh@1353 -- # echo 4088 00:15:38.494 08:01:44 -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:15:38.494 992978df-425f-49bd-9aaa-8b7841063557 00:15:38.751 08:01:44 -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:15:39.017 08:01:44 -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:15:39.017 08:01:44 -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:15:39.281 08:01:45 -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:39.281 08:01:45 -- common/autotest_common.sh@1339 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:39.281 08:01:45 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:15:39.281 08:01:45 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:39.281 08:01:45 -- common/autotest_common.sh@1318 -- # local sanitizers 00:15:39.281 08:01:45 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:39.281 08:01:45 -- common/autotest_common.sh@1320 -- # shift 00:15:39.281 08:01:45 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:15:39.281 08:01:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:39.281 08:01:45 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:39.281 08:01:45 -- common/autotest_common.sh@1324 -- # grep libasan 00:15:39.281 08:01:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:39.281 08:01:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:15:39.281 08:01:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:15:39.281 08:01:45 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:15:39.281 08:01:45 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:15:39.281 08:01:45 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:15:39.281 08:01:45 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:15:39.281 08:01:45 -- common/autotest_common.sh@1324 -- # asan_lib= 00:15:39.281 08:01:45 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:15:39.281 08:01:45 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:15:39.281 08:01:45 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:15:39.538 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:15:39.538 fio-3.35 00:15:39.539 Starting 1 thread 00:15:42.080 00:15:42.080 test: (groupid=0, jobs=1): err= 0: pid=77726: Sat Jul 13 08:01:47 2024 00:15:42.080 read: IOPS=5935, BW=23.2MiB/s (24.3MB/s)(46.6MiB/2010msec) 00:15:42.080 slat (usec): min=2, max=307, avg= 2.51, stdev= 3.55 00:15:42.080 clat (usec): 
min=3066, max=20369, avg=11294.14, stdev=939.69 00:15:42.080 lat (usec): min=3076, max=20371, avg=11296.65, stdev=939.36 00:15:42.080 clat percentiles (usec): 00:15:42.080 | 1.00th=[ 9241], 5.00th=[ 9896], 10.00th=[10290], 20.00th=[10552], 00:15:42.080 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:15:42.080 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12780], 00:15:42.080 | 99.00th=[13304], 99.50th=[13960], 99.90th=[17433], 99.95th=[18744], 00:15:42.080 | 99.99th=[20317] 00:15:42.080 bw ( KiB/s): min=22960, max=24064, per=99.92%, avg=23724.00, stdev=516.25, samples=4 00:15:42.080 iops : min= 5740, max= 6016, avg=5931.00, stdev=129.06, samples=4 00:15:42.080 write: IOPS=5929, BW=23.2MiB/s (24.3MB/s)(46.6MiB/2010msec); 0 zone resets 00:15:42.080 slat (usec): min=2, max=229, avg= 2.61, stdev= 2.34 00:15:42.080 clat (usec): min=2353, max=20416, avg=10212.88, stdev=899.65 00:15:42.080 lat (usec): min=2367, max=20418, avg=10215.49, stdev=899.47 00:15:42.080 clat percentiles (usec): 00:15:42.080 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9503], 00:15:42.080 | 30.00th=[ 9765], 40.00th=[10028], 50.00th=[10159], 60.00th=[10421], 00:15:42.080 | 70.00th=[10683], 80.00th=[10814], 90.00th=[11207], 95.00th=[11469], 00:15:42.080 | 99.00th=[12125], 99.50th=[12518], 99.90th=[17433], 99.95th=[17695], 00:15:42.080 | 99.99th=[20317] 00:15:42.080 bw ( KiB/s): min=23552, max=23880, per=100.00%, avg=23722.00, stdev=139.69, samples=4 00:15:42.080 iops : min= 5888, max= 5970, avg=5930.50, stdev=34.92, samples=4 00:15:42.080 lat (msec) : 4=0.06%, 10=22.65%, 20=77.27%, 50=0.02% 00:15:42.080 cpu : usr=73.97%, sys=20.71%, ctx=30, majf=0, minf=5 00:15:42.080 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:15:42.080 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:42.080 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:15:42.080 issued rwts: total=11931,11919,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:42.080 latency : target=0, window=0, percentile=100.00%, depth=128 00:15:42.080 00:15:42.080 Run status group 0 (all jobs): 00:15:42.080 READ: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=46.6MiB (48.9MB), run=2010-2010msec 00:15:42.080 WRITE: bw=23.2MiB/s (24.3MB/s), 23.2MiB/s-23.2MiB/s (24.3MB/s-24.3MB/s), io=46.6MiB (48.8MB), run=2010-2010msec 00:15:42.080 08:01:47 -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:15:42.080 08:01:47 -- host/fio.sh@74 -- # sync 00:15:42.080 08:01:47 -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:15:42.339 08:01:48 -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:15:42.597 08:01:48 -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:15:42.856 08:01:48 -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:15:43.114 08:01:48 -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:15:44.049 08:01:49 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:15:44.049 08:01:49 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:15:44.049 08:01:49 -- host/fio.sh@86 -- # nvmftestfini 00:15:44.049 08:01:49 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:44.049 08:01:49 -- nvmf/common.sh@116 -- # sync 
00:15:44.049 08:01:49 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:15:44.049 08:01:49 -- nvmf/common.sh@119 -- # set +e 00:15:44.049 08:01:49 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:44.049 08:01:49 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:15:44.049 rmmod nvme_tcp 00:15:44.049 rmmod nvme_fabrics 00:15:44.049 rmmod nvme_keyring 00:15:44.049 08:01:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:44.049 08:01:49 -- nvmf/common.sh@123 -- # set -e 00:15:44.049 08:01:49 -- nvmf/common.sh@124 -- # return 0 00:15:44.049 08:01:49 -- nvmf/common.sh@477 -- # '[' -n 77500 ']' 00:15:44.049 08:01:49 -- nvmf/common.sh@478 -- # killprocess 77500 00:15:44.049 08:01:49 -- common/autotest_common.sh@926 -- # '[' -z 77500 ']' 00:15:44.049 08:01:49 -- common/autotest_common.sh@930 -- # kill -0 77500 00:15:44.049 08:01:49 -- common/autotest_common.sh@931 -- # uname 00:15:44.049 08:01:49 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:15:44.049 08:01:49 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77500 00:15:44.049 killing process with pid 77500 00:15:44.049 08:01:49 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:15:44.049 08:01:49 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:15:44.049 08:01:49 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77500' 00:15:44.049 08:01:49 -- common/autotest_common.sh@945 -- # kill 77500 00:15:44.049 08:01:49 -- common/autotest_common.sh@950 -- # wait 77500 00:15:44.308 08:01:49 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:44.308 08:01:49 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:15:44.308 08:01:49 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:15:44.308 08:01:49 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.308 08:01:49 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:15:44.308 08:01:49 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.308 08:01:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.308 08:01:49 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.308 08:01:49 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:15:44.308 00:15:44.308 real 0m19.501s 00:15:44.308 user 1m25.644s 00:15:44.308 sys 0m4.394s 00:15:44.308 08:01:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:44.308 08:01:49 -- common/autotest_common.sh@10 -- # set +x 00:15:44.308 ************************************ 00:15:44.308 END TEST nvmf_fio_host 00:15:44.308 ************************************ 00:15:44.308 08:01:49 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:44.308 08:01:49 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:15:44.308 08:01:49 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:15:44.308 08:01:49 -- common/autotest_common.sh@10 -- # set +x 00:15:44.308 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:15:44.308 ************************************ 00:15:44.308 START TEST nvmf_failover 00:15:44.308 ************************************ 00:15:44.308 08:01:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:15:44.308 * Looking for test storage... 
00:15:44.308 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:15:44.308 08:01:50 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:44.308 08:01:50 -- nvmf/common.sh@7 -- # uname -s 00:15:44.308 08:01:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:44.308 08:01:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:44.308 08:01:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:44.308 08:01:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:44.308 08:01:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:44.308 08:01:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:44.308 08:01:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:44.308 08:01:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:44.308 08:01:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:44.308 08:01:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:44.308 08:01:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:15:44.308 08:01:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:15:44.309 08:01:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:44.309 08:01:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:44.309 08:01:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:44.309 08:01:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:44.309 08:01:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:44.309 08:01:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:44.309 08:01:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:44.309 08:01:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.309 08:01:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.309 08:01:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.309 08:01:50 -- paths/export.sh@5 
-- # export PATH 00:15:44.309 08:01:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:44.309 08:01:50 -- nvmf/common.sh@46 -- # : 0 00:15:44.309 08:01:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:44.309 08:01:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:44.309 08:01:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:44.309 08:01:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:44.309 08:01:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:44.309 08:01:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:44.309 08:01:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:44.309 08:01:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:44.309 08:01:50 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:44.309 08:01:50 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:44.309 08:01:50 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:44.309 08:01:50 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:44.309 08:01:50 -- host/failover.sh@18 -- # nvmftestinit 00:15:44.309 08:01:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:15:44.309 08:01:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:44.309 08:01:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:44.309 08:01:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:44.309 08:01:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:44.309 08:01:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.309 08:01:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.309 08:01:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:44.309 08:01:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:15:44.309 08:01:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:15:44.309 08:01:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:15:44.309 08:01:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:15:44.309 08:01:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:15:44.309 08:01:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:15:44.309 08:01:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:44.309 08:01:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:44.309 08:01:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:15:44.309 08:01:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:15:44.309 08:01:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:15:44.309 08:01:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:15:44.309 08:01:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:15:44.309 08:01:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:44.309 08:01:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:15:44.309 08:01:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:15:44.309 08:01:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:15:44.309 08:01:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:15:44.309 08:01:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:15:44.309 08:01:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:15:44.567 Cannot find device "nvmf_tgt_br" 00:15:44.567 08:01:50 -- nvmf/common.sh@154 -- # true 00:15:44.567 08:01:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:15:44.567 Cannot find device "nvmf_tgt_br2" 00:15:44.567 08:01:50 -- nvmf/common.sh@155 -- # true 00:15:44.567 08:01:50 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:15:44.567 08:01:50 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:15:44.567 Cannot find device "nvmf_tgt_br" 00:15:44.567 08:01:50 -- nvmf/common.sh@157 -- # true 00:15:44.567 08:01:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:15:44.567 Cannot find device "nvmf_tgt_br2" 00:15:44.567 08:01:50 -- nvmf/common.sh@158 -- # true 00:15:44.567 08:01:50 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:15:44.567 08:01:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:15:44.567 08:01:50 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:15:44.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.567 08:01:50 -- nvmf/common.sh@161 -- # true 00:15:44.567 08:01:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:15:44.567 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:15:44.567 08:01:50 -- nvmf/common.sh@162 -- # true 00:15:44.567 08:01:50 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:15:44.567 08:01:50 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:15:44.567 08:01:50 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:15:44.567 08:01:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:15:44.567 08:01:50 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:15:44.567 08:01:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:15:44.567 08:01:50 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:15:44.568 08:01:50 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:15:44.568 08:01:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:15:44.568 08:01:50 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:15:44.568 08:01:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:15:44.568 08:01:50 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:15:44.568 08:01:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:15:44.568 08:01:50 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:15:44.568 08:01:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:15:44.568 08:01:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:15:44.568 08:01:50 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:15:44.568 08:01:50 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:15:44.568 08:01:50 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:15:44.568 08:01:50 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:15:44.568 08:01:50 -- nvmf/common.sh@197 -- # ip 
link set nvmf_tgt_br2 master nvmf_br 00:15:44.827 08:01:50 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:15:44.827 08:01:50 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:15:44.827 08:01:50 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:15:44.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:44.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.058 ms 00:15:44.827 00:15:44.827 --- 10.0.0.2 ping statistics --- 00:15:44.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.827 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:15:44.827 08:01:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:15:44.827 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:15:44.827 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.045 ms 00:15:44.827 00:15:44.827 --- 10.0.0.3 ping statistics --- 00:15:44.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.827 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:15:44.827 08:01:50 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:15:44.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:44.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:15:44.827 00:15:44.827 --- 10.0.0.1 ping statistics --- 00:15:44.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:44.827 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:15:44.827 08:01:50 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:44.827 08:01:50 -- nvmf/common.sh@421 -- # return 0 00:15:44.827 08:01:50 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:44.827 08:01:50 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:44.827 08:01:50 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:15:44.827 08:01:50 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:15:44.827 08:01:50 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:44.827 08:01:50 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:15:44.827 08:01:50 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:15:44.827 08:01:50 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:15:44.827 08:01:50 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:44.827 08:01:50 -- common/autotest_common.sh@712 -- # xtrace_disable 00:15:44.827 08:01:50 -- common/autotest_common.sh@10 -- # set +x 00:15:44.827 08:01:50 -- nvmf/common.sh@469 -- # nvmfpid=77940 00:15:44.827 08:01:50 -- nvmf/common.sh@470 -- # waitforlisten 77940 00:15:44.827 08:01:50 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:44.827 08:01:50 -- common/autotest_common.sh@819 -- # '[' -z 77940 ']' 00:15:44.827 08:01:50 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:44.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:44.827 08:01:50 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:44.827 08:01:50 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:44.827 08:01:50 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:44.827 08:01:50 -- common/autotest_common.sh@10 -- # set +x 00:15:44.827 [2024-07-13 08:01:50.501358] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:15:44.827 [2024-07-13 08:01:50.501439] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:45.085 [2024-07-13 08:01:50.643240] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:45.085 [2024-07-13 08:01:50.685388] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:45.085 [2024-07-13 08:01:50.685587] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:45.085 [2024-07-13 08:01:50.685604] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:45.085 [2024-07-13 08:01:50.685615] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:45.085 [2024-07-13 08:01:50.686575] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:45.085 [2024-07-13 08:01:50.686766] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:45.085 [2024-07-13 08:01:50.686789] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.019 08:01:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:46.019 08:01:51 -- common/autotest_common.sh@852 -- # return 0 00:15:46.019 08:01:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:46.019 08:01:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:15:46.019 08:01:51 -- common/autotest_common.sh@10 -- # set +x 00:15:46.019 08:01:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.019 08:01:51 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:46.019 [2024-07-13 08:01:51.764889] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.019 08:01:51 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:15:46.277 Malloc0 00:15:46.277 08:01:52 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:46.535 08:01:52 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:46.792 08:01:52 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:47.050 [2024-07-13 08:01:52.696777] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:47.050 08:01:52 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:47.309 [2024-07-13 08:01:52.913008] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:15:47.309 08:01:52 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:47.309 [2024-07-13 08:01:53.113237] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:15:47.568 08:01:53 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 
00:15:47.568 08:01:53 -- host/failover.sh@31 -- # bdevperf_pid=77980 00:15:47.568 08:01:53 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:47.568 08:01:53 -- host/failover.sh@34 -- # waitforlisten 77980 /var/tmp/bdevperf.sock 00:15:47.568 08:01:53 -- common/autotest_common.sh@819 -- # '[' -z 77980 ']' 00:15:47.568 08:01:53 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:47.568 08:01:53 -- common/autotest_common.sh@824 -- # local max_retries=100 00:15:47.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:47.568 08:01:53 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:47.568 08:01:53 -- common/autotest_common.sh@828 -- # xtrace_disable 00:15:47.568 08:01:53 -- common/autotest_common.sh@10 -- # set +x 00:15:48.506 08:01:54 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:15:48.506 08:01:54 -- common/autotest_common.sh@852 -- # return 0 00:15:48.506 08:01:54 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:48.765 NVMe0n1 00:15:48.765 08:01:54 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:49.024 00:15:49.024 08:01:54 -- host/failover.sh@39 -- # run_test_pid=77997 00:15:49.024 08:01:54 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:49.024 08:01:54 -- host/failover.sh@41 -- # sleep 1 00:15:49.983 08:01:55 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:50.242 [2024-07-13 08:01:55.981572] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.242 [2024-07-13 08:01:55.981658] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.242 [2024-07-13 08:01:55.981702] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.242 [2024-07-13 08:01:55.981711] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.242 [2024-07-13 08:01:55.981719] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.242 [2024-07-13 08:01:55.981728] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.242 [2024-07-13 08:01:55.981736] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.242 [2024-07-13 08:01:55.981744] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.242 [2024-07-13 08:01:55.981752] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is 
same with the state(5) to be set 00:15:50.242 [2024-07-13 08:01:55.981760] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.242 [2024-07-13 08:01:55.981768] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.242 [2024-07-13 08:01:55.981776] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.242 [2024-07-13 08:01:55.981784] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.243 [2024-07-13 08:01:55.981792] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.243 [2024-07-13 08:01:55.981800] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.243 [2024-07-13 08:01:55.981820] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.243 [2024-07-13 08:01:55.981830] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.243 [2024-07-13 08:01:55.981839] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.243 [2024-07-13 08:01:55.981847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.243 [2024-07-13 08:01:55.981855] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.243 [2024-07-13 08:01:55.981863] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.243 [2024-07-13 08:01:55.981871] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134ee70 is same with the state(5) to be set 00:15:50.243 08:01:55 -- host/failover.sh@45 -- # sleep 3 00:15:53.529 08:01:59 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:15:53.529 00:15:53.529 08:01:59 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:15:54.094 [2024-07-13 08:01:59.610726] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610801] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610815] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610825] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610834] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610843] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610852] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610861] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610870] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610878] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610887] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610896] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610905] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610913] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610922] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610949] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610958] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610966] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 [2024-07-13 08:01:59.610975] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f550 is same with the state(5) to be set 00:15:54.094 08:01:59 -- host/failover.sh@50 -- # sleep 3 00:15:57.376 08:02:02 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:57.376 [2024-07-13 08:02:02.894983] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:57.376 08:02:02 -- host/failover.sh@55 -- # sleep 1 00:15:58.307 08:02:03 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:15:58.564 [2024-07-13 08:02:04.162763] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 [2024-07-13 08:02:04.162844] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 [2024-07-13 08:02:04.162857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 [2024-07-13 08:02:04.162866] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 [2024-07-13 08:02:04.162875] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 [2024-07-13 08:02:04.162883] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 [2024-07-13 08:02:04.162908] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 [2024-07-13 08:02:04.162918] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 [2024-07-13 08:02:04.162926] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 [2024-07-13 08:02:04.162935] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 [2024-07-13 08:02:04.162944] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 [2024-07-13 08:02:04.162954] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 [2024-07-13 08:02:04.162962] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 [2024-07-13 08:02:04.162971] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 [2024-07-13 08:02:04.162980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x129dec0 is same with the state(5) to be set 00:15:58.564 08:02:04 -- host/failover.sh@59 -- # wait 77997 00:16:05.128 0 00:16:05.128 08:02:09 -- host/failover.sh@61 -- # killprocess 77980 00:16:05.128 08:02:09 -- common/autotest_common.sh@926 -- # '[' -z 77980 ']' 00:16:05.128 08:02:09 -- common/autotest_common.sh@930 -- # kill -0 77980 00:16:05.128 08:02:09 -- common/autotest_common.sh@931 -- # uname 00:16:05.128 08:02:09 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:05.128 08:02:09 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77980 00:16:05.128 killing process with pid 77980 00:16:05.128 08:02:09 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:05.128 08:02:09 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:05.128 08:02:09 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77980' 00:16:05.128 08:02:09 -- common/autotest_common.sh@945 -- # kill 77980 00:16:05.128 08:02:09 -- common/autotest_common.sh@950 -- # wait 77980 00:16:05.128 08:02:10 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:05.128 [2024-07-13 08:01:53.170323] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:05.128 [2024-07-13 08:01:53.170449] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77980 ] 00:16:05.128 [2024-07-13 08:01:53.306593] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:05.128 [2024-07-13 08:01:53.345786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:05.128 Running I/O for 15 seconds... 00:16:05.128 [2024-07-13 08:01:55.981960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:123360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:123368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:123376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:123384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:123392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:123400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:123408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:123424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 
[2024-07-13 08:01:55.982291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:124080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:124088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:124104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:124128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:123440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:123456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:123464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:123488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:123496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982631] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:123504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:123520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:123528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:124144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.128 [2024-07-13 08:01:55.982789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.128 [2024-07-13 08:01:55.982832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.128 [2024-07-13 08:01:55.982874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:124192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.128 [2024-07-13 08:01:55.982903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.128 [2024-07-13 08:01:55.982929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:124208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.128 [2024-07-13 08:01:55.982956] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.128 [2024-07-13 08:01:55.982970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:124216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.982983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.982997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:124224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:124232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.983064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:123544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:123560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:123584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:123600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:123608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:123616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983248] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:123624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:123656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.983326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:124256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.983381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.983407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:124280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:124296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.983486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.983518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:124312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.983545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:124328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.983624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:124352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:123664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:123688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:123704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:123712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:123736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:123760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:123768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:123816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.983960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.983974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.983986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.984013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:124392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.984086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.984113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 
08:01:55.984128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:124408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:124416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:124424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.984196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:124432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.984224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:124440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.984286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:124464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.984342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.984384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:124480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.984414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984429] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:124488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:124496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:124504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.984495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:124512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.984521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:124520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:123824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:123832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:123840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:123856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:123864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984705] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:62 nsid:1 lba:123880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:123896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:123904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:124528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.984798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:124536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.984836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.129 [2024-07-13 08:01:55.984865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.129 [2024-07-13 08:01:55.984880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.129 [2024-07-13 08:01:55.984892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.984907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:124560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.984919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.984936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.984949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.984964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.130 [2024-07-13 08:01:55.984976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.984997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 
lba:124584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.130 [2024-07-13 08:01:55.985010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:124592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:124600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.130 [2024-07-13 08:01:55.985091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:123920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:123928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:123936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:123944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:123968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:123992 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:124016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:124040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:124624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.130 [2024-07-13 08:01:55.985398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.130 [2024-07-13 08:01:55.985425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:124648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.130 [2024-07-13 08:01:55.985452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.130 [2024-07-13 08:01:55.985479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:124664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.130 [2024-07-13 08:01:55.985506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.130 [2024-07-13 08:01:55.985532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:124680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.130 
[2024-07-13 08:01:55.985559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:124048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:124072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:124112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:124120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:124152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.130 [2024-07-13 08:01:55.985755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5f1e0 is same with the state(5) to be set 00:16:05.130 [2024-07-13 08:01:55.985812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.130 [2024-07-13 08:01:55.985823] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.130 [2024-07-13 08:01:55.985834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:124160 len:8 PRP1 0x0 PRP2 0x0 00:16:05.130 [2024-07-13 08:01:55.985849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.985894] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d5f1e0 was disconnected and freed. reset controller. 
00:16:05.130 [2024-07-13 08:01:55.985910] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:05.130 [2024-07-13 08:01:55.985964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.130 [2024-07-13 08:01:55.985985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.986000] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.130 [2024-07-13 08:01:55.986013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.986027] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.130 [2024-07-13 08:01:55.986040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.986053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.130 [2024-07-13 08:01:55.986066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.130 [2024-07-13 08:01:55.986079] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:05.130 [2024-07-13 08:01:55.986159] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4dea0 (9): Bad file descriptor 00:16:05.130 [2024-07-13 08:01:55.988582] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:05.130 [2024-07-13 08:01:56.017721] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:05.131 [2024-07-13 08:01:59.611044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:114472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:114480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:114488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:114496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:114512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:114528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:114536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:114552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:115184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:115208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 
08:01:59.611409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:115224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:114560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:114584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:114592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:114616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:114624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:114640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:114680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:114712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611709] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:115264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:115272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.131 [2024-07-13 08:01:59.611751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:115280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:115288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:115296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:115304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.131 [2024-07-13 08:01:59.611884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:115312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.131 [2024-07-13 08:01:59.611922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:115320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.131 [2024-07-13 08:01:59.611951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:115328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.611980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.611996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:115336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612024] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:115352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.131 [2024-07-13 08:01:59.612066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:115360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.131 [2024-07-13 08:01:59.612094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:115368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.131 [2024-07-13 08:01:59.612124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:115376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:115384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:115392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.131 [2024-07-13 08:01:59.612210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:115400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:115408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:115416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 
lba:114720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:114728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:114744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:114752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:114760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:114800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:114824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:114832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:115424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.131 [2024-07-13 08:01:59.612591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:115432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:115440 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:16:05.131 [2024-07-13 08:01:59.612649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:115448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.131 [2024-07-13 08:01:59.612687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:115456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.131 [2024-07-13 08:01:59.612717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:115464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:115472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.131 [2024-07-13 08:01:59.612789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:115480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.131 [2024-07-13 08:01:59.612811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.612829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:115488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.612842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.612858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:115496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.612871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.612887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:115504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.612900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.612916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.612930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.612946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:115520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 
[2024-07-13 08:01:59.612959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.612974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:115528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.612987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:115536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.613016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:114840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:114848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:114864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:114880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:114888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:114896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:114904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:114912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613252] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:115544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.613288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:115552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.613316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:114920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:114928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:114936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:114944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:114960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:114976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:114984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:115016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613558] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:115560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.613587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:115568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:115576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:115584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:115592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.613702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:115600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:115608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:115616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:115624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.613841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:115632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613870] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:115640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.613900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:115648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:115656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.613958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.613973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:115664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.613989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:115672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.614018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:115040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.614047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:115064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.614076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:115072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.614120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:115080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.614150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:115096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.614178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:115144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.614214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:115152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.614245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:115160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.614273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:115680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.614302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.614331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:115696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.614360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:115704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.614389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:115712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.614417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:115720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.614458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:115728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.614490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:115736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.614519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:115744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.614548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:115752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.614577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:115760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.614613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:115768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.614644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:115776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.614672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:115784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.614701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:115792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.132 [2024-07-13 08:01:59.614730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.132 [2024-07-13 08:01:59.614745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:115800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.132 [2024-07-13 08:01:59.614759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:01:59.614784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:115168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:01:59.614800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 
08:01:59.614816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:115176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:01:59.614830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:01:59.614845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:115192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:01:59.614858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:01:59.614874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:115200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:01:59.614887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:01:59.614902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:115216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:01:59.614916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:01:59.614931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:115232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:01:59.614944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:01:59.614960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:115248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:01:59.614975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:01:59.615008] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5fe70 is same with the state(5) to be set 00:16:05.133 [2024-07-13 08:01:59.615025] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.133 [2024-07-13 08:01:59.615036] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.133 [2024-07-13 08:01:59.615046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:115256 len:8 PRP1 0x0 PRP2 0x0 00:16:05.133 [2024-07-13 08:01:59.615060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:01:59.615107] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d5fe70 was disconnected and freed. reset controller. 
00:16:05.133 [2024-07-13 08:01:59.615125] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:16:05.133 [2024-07-13 08:01:59.615181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.133 [2024-07-13 08:01:59.615203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:01:59.615218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.133 [2024-07-13 08:01:59.615232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:01:59.615246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.133 [2024-07-13 08:01:59.615259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:01:59.615274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:05.133 [2024-07-13 08:01:59.615287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:01:59.615301] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:05.133 [2024-07-13 08:01:59.617934] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:05.133 [2024-07-13 08:01:59.617975] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4dea0 (9): Bad file descriptor 00:16:05.133 [2024-07-13 08:01:59.650796] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:16:05.133 [2024-07-13 08:02:04.163048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163410] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.133 [2024-07-13 08:02:04.163604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.133 [2024-07-13 08:02:04.163635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.133 [2024-07-13 08:02:04.163663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163708] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:40112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:40120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.133 [2024-07-13 08:02:04.163853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.133 [2024-07-13 08:02:04.163918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.163975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.163999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.164013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.164042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.164071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.164101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.164130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.164158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.164187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.133 [2024-07-13 08:02:04.164230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.133 [2024-07-13 08:02:04.164275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:40184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.133 [2024-07-13 08:02:04.164304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.133 [2024-07-13 08:02:04.164332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.164361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.133 [2024-07-13 08:02:04.164396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.164429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.133 [2024-07-13 08:02:04.164458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.133 [2024-07-13 08:02:04.164487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.133 [2024-07-13 08:02:04.164546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.164590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.164618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.164646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.164675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 
08:02:04.164703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.133 [2024-07-13 08:02:04.164718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.133 [2024-07-13 08:02:04.164731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.164746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.164759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.164774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.164801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.164831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.164865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.164897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.164936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.164953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.164967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.164982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:40272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.164996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.165025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:40288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.165053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:40296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:40304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:40312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.165140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.165169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:40328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.165199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:40344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.165259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:40352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:40360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.165337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:40368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:40376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:40384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.165793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:40408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:40416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.165882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:40432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.165965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.165980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.165993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:40448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.166050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:40472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.166138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 
08:02:04.166154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:39776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:39856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:40480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.166418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:40488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.166447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:40504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:40512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:40520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.166575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:40536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:40544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:40552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:40560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:40568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.166816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:79 nsid:1 lba:40576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.134 [2024-07-13 08:02:04.166845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.134 [2024-07-13 08:02:04.166861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:40584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.134 [2024-07-13 08:02:04.166888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.166906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:40592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.135 [2024-07-13 08:02:04.166919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.166935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:40600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.135 [2024-07-13 08:02:04.166949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.166964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:40608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.135 [2024-07-13 08:02:04.166978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.166993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:40616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.135 [2024-07-13 08:02:04.167007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.167022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:40624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.135 [2024-07-13 08:02:04.167043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.167059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:16:05.135 [2024-07-13 08:02:04.167073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.167088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:40640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.135 [2024-07-13 08:02:04.167101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.167117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:39944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.135 [2024-07-13 08:02:04.167130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.167146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:39960 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.135 [2024-07-13 08:02:04.167160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.167175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.135 [2024-07-13 08:02:04.167188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.167204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:40016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.135 [2024-07-13 08:02:04.167217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.167233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.135 [2024-07-13 08:02:04.167246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.167261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:40048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.135 [2024-07-13 08:02:04.167274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.167290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:16:05.135 [2024-07-13 08:02:04.167306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.167322] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d51190 is same with the state(5) to be set 00:16:05.135 [2024-07-13 08:02:04.167338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:16:05.135 [2024-07-13 08:02:04.167349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:16:05.135 [2024-07-13 08:02:04.167360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40064 len:8 PRP1 0x0 PRP2 0x0 00:16:05.135 [2024-07-13 08:02:04.167376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:05.135 [2024-07-13 08:02:04.167424] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d51190 was disconnected and freed. reset controller. 
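Every READ/WRITE command still queued on qpair 1 is completed with ABORTED - SQ DELETION (00/08) because the failover test tore down the submission queue while I/O was in flight; once the queue is drained, bdev_nvme frees the qpair and schedules a controller reset, as the last line above shows. If a burst like this needs to be summarized from a saved copy of the bdevperf output (the try.txt file dumped later in this run), ordinary text tools are enough; a rough sketch, not part of host/failover.sh, with the path taken from this trace:

  # Hypothetical helper, not part of the test suite: summarize an abort burst
  # from the saved bdevperf log.
  log=/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt
  grep -c 'ABORTED - SQ DELETION' "$log"                   # total aborted completions
  grep -oE '(READ|WRITE) sqid:1' "$log" | sort | uniq -c   # aborted I/O split by opcode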
00:16:05.135 [2024-07-13 08:02:04.167441] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:16:05.135 [2024-07-13 08:02:04.167503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:16:05.135 [2024-07-13 08:02:04.167525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:05.135 [2024-07-13 08:02:04.167540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:16:05.135 [2024-07-13 08:02:04.167554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:05.135 [2024-07-13 08:02:04.167568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:16:05.135 [2024-07-13 08:02:04.167581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:05.135 [2024-07-13 08:02:04.167595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:16:05.135 [2024-07-13 08:02:04.167608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:16:05.135 [2024-07-13 08:02:04.167622] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:16:05.135 [2024-07-13 08:02:04.170644] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:16:05.135 [2024-07-13 08:02:04.170699] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d4dea0 (9): Bad file descriptor
00:16:05.135 [2024-07-13 08:02:04.192799] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:16:05.135
00:16:05.135 Latency(us)
00:16:05.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:05.135 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:05.135 Verification LBA range: start 0x0 length 0x4000
00:16:05.135 NVMe0n1 : 15.01 12526.89 48.93 276.45 0.00 9977.95 444.97 15013.70
00:16:05.135 ===================================================================================================================
00:16:05.135 Total : 12526.89 48.93 276.45 0.00 9977.95 444.97 15013.70
00:16:05.135 Received shutdown signal, test time was about 15.000000 seconds
00:16:05.135
00:16:05.135 Latency(us)
00:16:05.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:05.135 ===================================================================================================================
00:16:05.135 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:05.135 08:02:10 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:16:05.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
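The pass criterion being traced here comes from host/failover.sh@65 and @67: the bdevperf output saved in try.txt must contain exactly three 'Resetting controller successful' messages, one per failover performed during the 15-second run. A standalone sketch of that check, reconstructed from the trace (the real script may differ in detail):

  # Assumes the bdevperf output was kept at the path recorded in this trace.
  count=$(grep -c 'Resetting controller successful' \
      /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt)
  if (( count != 3 )); then
      echo "expected 3 successful controller resets, got $count" >&2
      exit 1
  fi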
00:16:05.135 08:02:10 -- host/failover.sh@65 -- # count=3 00:16:05.135 08:02:10 -- host/failover.sh@67 -- # (( count != 3 )) 00:16:05.135 08:02:10 -- host/failover.sh@73 -- # bdevperf_pid=78079 00:16:05.135 08:02:10 -- host/failover.sh@75 -- # waitforlisten 78079 /var/tmp/bdevperf.sock 00:16:05.135 08:02:10 -- common/autotest_common.sh@819 -- # '[' -z 78079 ']' 00:16:05.135 08:02:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:05.135 08:02:10 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:16:05.135 08:02:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:05.135 08:02:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:05.135 08:02:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:05.135 08:02:10 -- common/autotest_common.sh@10 -- # set +x 00:16:05.394 08:02:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:05.394 08:02:11 -- common/autotest_common.sh@852 -- # return 0 00:16:05.394 08:02:11 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:16:05.652 [2024-07-13 08:02:11.325571] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:05.652 08:02:11 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:16:05.911 [2024-07-13 08:02:11.565846] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:16:05.911 08:02:11 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:06.170 NVMe0n1 00:16:06.170 08:02:11 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:06.429 00:16:06.429 08:02:12 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:06.688 00:16:06.688 08:02:12 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:06.688 08:02:12 -- host/failover.sh@82 -- # grep -q NVMe0 00:16:06.947 08:02:12 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:07.204 08:02:12 -- host/failover.sh@87 -- # sleep 3 00:16:10.489 08:02:16 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:10.489 08:02:16 -- host/failover.sh@88 -- # grep -q NVMe0 00:16:10.489 08:02:16 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:10.489 08:02:16 -- host/failover.sh@90 -- # run_test_pid=78125 00:16:10.489 08:02:16 -- host/failover.sh@92 -- # wait 78125 00:16:11.864 0 00:16:11.864 08:02:17 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:11.864 [2024-07-13 08:02:10.092052] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:11.864 [2024-07-13 08:02:10.092150] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78079 ] 00:16:11.864 [2024-07-13 08:02:10.231596] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:11.864 [2024-07-13 08:02:10.271496] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:11.864 [2024-07-13 08:02:12.979263] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:16:11.864 [2024-07-13 08:02:12.979372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.864 [2024-07-13 08:02:12.979398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.864 [2024-07-13 08:02:12.979417] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.864 [2024-07-13 08:02:12.979432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.864 [2024-07-13 08:02:12.979455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.864 [2024-07-13 08:02:12.979472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.864 [2024-07-13 08:02:12.979486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:11.864 [2024-07-13 08:02:12.979499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:11.864 [2024-07-13 08:02:12.979512] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:16:11.864 [2024-07-13 08:02:12.979561] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:16:11.864 [2024-07-13 08:02:12.979621] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11abea0 (9): Bad file descriptor 00:16:11.864 [2024-07-13 08:02:12.990986] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:16:11.864 Running I/O for 1 seconds... 
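The short verification pass whose results follow was driven the way the trace records: bdevperf started in wait-for-RPC mode (-z) on /var/tmp/bdevperf.sock, the controller attached over that socket, and the run triggered with bdevperf.py perform_tests. A condensed sketch assembled only from commands already present in this log; the real script also waits for the RPC socket before issuing commands:

  # Condensed from the host/failover.sh trace; paths and arguments are the ones
  # recorded above.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f &
  # (wait for /var/tmp/bdevperf.sock to appear before the next step)
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock \
      bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests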
00:16:11.864 00:16:11.864 Latency(us) 00:16:11.864 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:11.864 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:16:11.864 Verification LBA range: start 0x0 length 0x4000 00:16:11.864 NVMe0n1 : 1.01 13191.27 51.53 0.00 0.00 9656.10 1057.51 11915.64 00:16:11.864 =================================================================================================================== 00:16:11.864 Total : 13191.27 51.53 0.00 0.00 9656.10 1057.51 11915.64 00:16:11.864 08:02:17 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:11.864 08:02:17 -- host/failover.sh@95 -- # grep -q NVMe0 00:16:11.864 08:02:17 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:12.149 08:02:17 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:12.149 08:02:17 -- host/failover.sh@99 -- # grep -q NVMe0 00:16:12.416 08:02:18 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:12.675 08:02:18 -- host/failover.sh@101 -- # sleep 3 00:16:15.957 08:02:21 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:16:15.957 08:02:21 -- host/failover.sh@103 -- # grep -q NVMe0 00:16:15.957 08:02:21 -- host/failover.sh@108 -- # killprocess 78079 00:16:15.957 08:02:21 -- common/autotest_common.sh@926 -- # '[' -z 78079 ']' 00:16:15.957 08:02:21 -- common/autotest_common.sh@930 -- # kill -0 78079 00:16:15.957 08:02:21 -- common/autotest_common.sh@931 -- # uname 00:16:15.957 08:02:21 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:15.957 08:02:21 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78079 00:16:15.957 killing process with pid 78079 00:16:15.957 08:02:21 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:15.957 08:02:21 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:15.957 08:02:21 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78079' 00:16:15.957 08:02:21 -- common/autotest_common.sh@945 -- # kill 78079 00:16:15.957 08:02:21 -- common/autotest_common.sh@950 -- # wait 78079 00:16:16.214 08:02:21 -- host/failover.sh@110 -- # sync 00:16:16.214 08:02:21 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:16.472 08:02:22 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:16:16.472 08:02:22 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:16:16.472 08:02:22 -- host/failover.sh@116 -- # nvmftestfini 00:16:16.472 08:02:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:16.472 08:02:22 -- nvmf/common.sh@116 -- # sync 00:16:16.472 08:02:22 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:16.472 08:02:22 -- nvmf/common.sh@119 -- # set +e 00:16:16.472 08:02:22 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:16.472 08:02:22 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:16.472 rmmod nvme_tcp 00:16:16.472 rmmod nvme_fabrics 00:16:16.472 rmmod nvme_keyring 00:16:16.472 08:02:22 -- nvmf/common.sh@122 
-- # modprobe -v -r nvme-fabrics 00:16:16.472 08:02:22 -- nvmf/common.sh@123 -- # set -e 00:16:16.472 08:02:22 -- nvmf/common.sh@124 -- # return 0 00:16:16.472 08:02:22 -- nvmf/common.sh@477 -- # '[' -n 77940 ']' 00:16:16.472 08:02:22 -- nvmf/common.sh@478 -- # killprocess 77940 00:16:16.472 08:02:22 -- common/autotest_common.sh@926 -- # '[' -z 77940 ']' 00:16:16.472 08:02:22 -- common/autotest_common.sh@930 -- # kill -0 77940 00:16:16.472 08:02:22 -- common/autotest_common.sh@931 -- # uname 00:16:16.472 08:02:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:16.472 08:02:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 77940 00:16:16.472 killing process with pid 77940 00:16:16.472 08:02:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:16.472 08:02:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:16.472 08:02:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 77940' 00:16:16.472 08:02:22 -- common/autotest_common.sh@945 -- # kill 77940 00:16:16.472 08:02:22 -- common/autotest_common.sh@950 -- # wait 77940 00:16:16.730 08:02:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:16.730 08:02:22 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:16.730 08:02:22 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:16.730 08:02:22 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:16.730 08:02:22 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:16.730 08:02:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.730 08:02:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.730 08:02:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.730 08:02:22 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:16.730 00:16:16.730 real 0m32.421s 00:16:16.730 user 2m6.087s 00:16:16.730 sys 0m5.388s 00:16:16.730 08:02:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:16.730 ************************************ 00:16:16.730 08:02:22 -- common/autotest_common.sh@10 -- # set +x 00:16:16.730 END TEST nvmf_failover 00:16:16.730 ************************************ 00:16:16.730 08:02:22 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:16.730 08:02:22 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:16.730 08:02:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:16.730 08:02:22 -- common/autotest_common.sh@10 -- # set +x 00:16:16.730 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:16:16.730 ************************************ 00:16:16.730 START TEST nvmf_discovery 00:16:16.730 ************************************ 00:16:16.730 08:02:22 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:16:16.730 * Looking for test storage... 
00:16:16.730 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:16.731 08:02:22 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:16.731 08:02:22 -- nvmf/common.sh@7 -- # uname -s 00:16:16.731 08:02:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:16.731 08:02:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:16.731 08:02:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:16.731 08:02:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:16.731 08:02:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:16.731 08:02:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:16.731 08:02:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:16.731 08:02:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:16.731 08:02:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:16.731 08:02:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:16.990 08:02:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:16:16.990 08:02:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:16:16.990 08:02:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:16.990 08:02:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:16.990 08:02:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:16.990 08:02:22 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:16.990 08:02:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:16.990 08:02:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:16.990 08:02:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:16.990 08:02:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.990 08:02:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.990 08:02:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.990 08:02:22 -- paths/export.sh@5 
-- # export PATH 00:16:16.990 08:02:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:16.990 08:02:22 -- nvmf/common.sh@46 -- # : 0 00:16:16.990 08:02:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:16.990 08:02:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:16.990 08:02:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:16.990 08:02:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:16.990 08:02:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:16.990 08:02:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:16.990 08:02:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:16.990 08:02:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:16.990 08:02:22 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:16:16.990 08:02:22 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:16:16.990 08:02:22 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:16:16.990 08:02:22 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:16:16.990 08:02:22 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:16:16.990 08:02:22 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:16:16.990 08:02:22 -- host/discovery.sh@25 -- # nvmftestinit 00:16:16.990 08:02:22 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:16.990 08:02:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:16.990 08:02:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:16.990 08:02:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:16.990 08:02:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:16.990 08:02:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:16.990 08:02:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:16.990 08:02:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:16.990 08:02:22 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:16.990 08:02:22 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:16.990 08:02:22 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:16.990 08:02:22 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:16.990 08:02:22 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:16.990 08:02:22 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:16.990 08:02:22 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:16.990 08:02:22 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:16.990 08:02:22 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:16.990 08:02:22 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:16.990 08:02:22 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:16.990 08:02:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:16.990 08:02:22 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:16.990 08:02:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:16.990 08:02:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:16.990 
08:02:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:16.990 08:02:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:16.990 08:02:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:16.990 08:02:22 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:16.990 08:02:22 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:16.990 Cannot find device "nvmf_tgt_br" 00:16:16.990 08:02:22 -- nvmf/common.sh@154 -- # true 00:16:16.990 08:02:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:16.990 Cannot find device "nvmf_tgt_br2" 00:16:16.990 08:02:22 -- nvmf/common.sh@155 -- # true 00:16:16.990 08:02:22 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:16.990 08:02:22 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:16.990 Cannot find device "nvmf_tgt_br" 00:16:16.990 08:02:22 -- nvmf/common.sh@157 -- # true 00:16:16.990 08:02:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:16.990 Cannot find device "nvmf_tgt_br2" 00:16:16.990 08:02:22 -- nvmf/common.sh@158 -- # true 00:16:16.990 08:02:22 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:16.990 08:02:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:16.990 08:02:22 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:16.990 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.990 08:02:22 -- nvmf/common.sh@161 -- # true 00:16:16.990 08:02:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:16.990 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:16.990 08:02:22 -- nvmf/common.sh@162 -- # true 00:16:16.990 08:02:22 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:16.990 08:02:22 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:16.990 08:02:22 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:16.990 08:02:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:16.990 08:02:22 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:16.990 08:02:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:16.990 08:02:22 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:16.990 08:02:22 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:16.990 08:02:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:16.990 08:02:22 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:16.990 08:02:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:16.990 08:02:22 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:16.990 08:02:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:16.990 08:02:22 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:17.248 08:02:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:17.248 08:02:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:17.248 08:02:22 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:17.248 08:02:22 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:17.248 08:02:22 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br 
master nvmf_br 00:16:17.248 08:02:22 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:17.248 08:02:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:17.248 08:02:22 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:17.248 08:02:22 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:17.248 08:02:22 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:17.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:17.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:16:17.248 00:16:17.248 --- 10.0.0.2 ping statistics --- 00:16:17.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.248 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:16:17.248 08:02:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:17.248 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:17.248 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:16:17.248 00:16:17.248 --- 10.0.0.3 ping statistics --- 00:16:17.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.248 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:16:17.248 08:02:22 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:17.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:17.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:16:17.248 00:16:17.248 --- 10.0.0.1 ping statistics --- 00:16:17.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:17.248 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:16:17.248 08:02:22 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:17.248 08:02:22 -- nvmf/common.sh@421 -- # return 0 00:16:17.248 08:02:22 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:17.248 08:02:22 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:17.248 08:02:22 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:17.248 08:02:22 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:17.248 08:02:22 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:17.248 08:02:22 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:17.248 08:02:22 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:17.248 08:02:22 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:16:17.248 08:02:22 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:17.248 08:02:22 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:17.248 08:02:22 -- common/autotest_common.sh@10 -- # set +x 00:16:17.248 08:02:22 -- nvmf/common.sh@469 -- # nvmfpid=78350 00:16:17.248 08:02:22 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:17.248 08:02:22 -- nvmf/common.sh@470 -- # waitforlisten 78350 00:16:17.248 08:02:22 -- common/autotest_common.sh@819 -- # '[' -z 78350 ']' 00:16:17.248 08:02:22 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:17.248 08:02:22 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:17.248 08:02:22 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:17.248 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
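At this point nvmf/common.sh has finished building the virtual test network for the discovery test: namespace nvmf_tgt_ns_spdk holds nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3), the initiator side keeps nvmf_init_if (10.0.0.1), and everything is joined through the nvmf_br bridge, with the three pings above confirming reachability before the target is started. A minimal sketch of repeating that connectivity check by hand, mirroring the commands in the trace:

  # Re-check the veth/netns topology the harness just created.
  for ip in 10.0.0.2 10.0.0.3; do
      ping -c 1 -W 1 "$ip" > /dev/null && echo "initiator -> $ip reachable"
  done
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 -W 1 10.0.0.1 > /dev/null \
      && echo "target namespace -> 10.0.0.1 reachable"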
00:16:17.248 08:02:22 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:17.248 08:02:22 -- common/autotest_common.sh@10 -- # set +x 00:16:17.248 [2024-07-13 08:02:22.956204] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:17.248 [2024-07-13 08:02:22.956308] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.505 [2024-07-13 08:02:23.097062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.505 [2024-07-13 08:02:23.135652] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:17.505 [2024-07-13 08:02:23.135836] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.505 [2024-07-13 08:02:23.135853] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.505 [2024-07-13 08:02:23.135865] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:17.505 [2024-07-13 08:02:23.135893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:18.442 08:02:23 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:18.442 08:02:23 -- common/autotest_common.sh@852 -- # return 0 00:16:18.442 08:02:23 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:18.442 08:02:23 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:18.442 08:02:23 -- common/autotest_common.sh@10 -- # set +x 00:16:18.442 08:02:23 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:18.442 08:02:23 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:18.442 08:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.442 08:02:23 -- common/autotest_common.sh@10 -- # set +x 00:16:18.442 [2024-07-13 08:02:23.952712] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:18.442 08:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.442 08:02:23 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:16:18.442 08:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.442 08:02:23 -- common/autotest_common.sh@10 -- # set +x 00:16:18.442 [2024-07-13 08:02:23.960899] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:18.442 08:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.442 08:02:23 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:16:18.442 08:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.442 08:02:23 -- common/autotest_common.sh@10 -- # set +x 00:16:18.442 null0 00:16:18.442 08:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.442 08:02:23 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:16:18.442 08:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.442 08:02:23 -- common/autotest_common.sh@10 -- # set +x 00:16:18.442 null1 00:16:18.442 08:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.442 08:02:23 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:16:18.442 08:02:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:18.442 08:02:23 -- 
common/autotest_common.sh@10 -- # set +x 00:16:18.442 08:02:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:18.442 08:02:23 -- host/discovery.sh@45 -- # hostpid=78376 00:16:18.442 08:02:23 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:16:18.442 08:02:23 -- host/discovery.sh@46 -- # waitforlisten 78376 /tmp/host.sock 00:16:18.442 08:02:23 -- common/autotest_common.sh@819 -- # '[' -z 78376 ']' 00:16:18.442 08:02:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:16:18.442 08:02:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:18.442 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:18.442 08:02:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:18.442 08:02:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:18.442 08:02:23 -- common/autotest_common.sh@10 -- # set +x 00:16:18.442 [2024-07-13 08:02:24.041805] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:18.442 [2024-07-13 08:02:24.041892] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78376 ] 00:16:18.442 [2024-07-13 08:02:24.181886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:18.443 [2024-07-13 08:02:24.220128] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:18.443 [2024-07-13 08:02:24.220296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:19.377 08:02:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:19.377 08:02:24 -- common/autotest_common.sh@852 -- # return 0 00:16:19.377 08:02:24 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:19.377 08:02:24 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:16:19.377 08:02:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.377 08:02:24 -- common/autotest_common.sh@10 -- # set +x 00:16:19.377 08:02:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.377 08:02:24 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:16:19.377 08:02:24 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.377 08:02:24 -- common/autotest_common.sh@10 -- # set +x 00:16:19.377 08:02:24 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.377 08:02:24 -- host/discovery.sh@72 -- # notify_id=0 00:16:19.377 08:02:24 -- host/discovery.sh@78 -- # get_subsystem_names 00:16:19.377 08:02:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:19.377 08:02:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:19.377 08:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.377 08:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.377 08:02:25 -- host/discovery.sh@59 -- # sort 00:16:19.377 08:02:25 -- host/discovery.sh@59 -- # xargs 00:16:19.377 08:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.377 08:02:25 -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:16:19.377 08:02:25 -- host/discovery.sh@79 -- # get_bdev_list 00:16:19.377 
08:02:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.377 08:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.377 08:02:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.377 08:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.377 08:02:25 -- host/discovery.sh@55 -- # sort 00:16:19.377 08:02:25 -- host/discovery.sh@55 -- # xargs 00:16:19.377 08:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.377 08:02:25 -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:16:19.377 08:02:25 -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:16:19.377 08:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.377 08:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.377 08:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.377 08:02:25 -- host/discovery.sh@82 -- # get_subsystem_names 00:16:19.377 08:02:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:19.377 08:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.377 08:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.377 08:02:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:19.377 08:02:25 -- host/discovery.sh@59 -- # sort 00:16:19.377 08:02:25 -- host/discovery.sh@59 -- # xargs 00:16:19.377 08:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.377 08:02:25 -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:16:19.377 08:02:25 -- host/discovery.sh@83 -- # get_bdev_list 00:16:19.377 08:02:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.377 08:02:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.377 08:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.377 08:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.377 08:02:25 -- host/discovery.sh@55 -- # sort 00:16:19.377 08:02:25 -- host/discovery.sh@55 -- # xargs 00:16:19.635 08:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.635 08:02:25 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:16:19.635 08:02:25 -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:16:19.635 08:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.635 08:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.635 08:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.635 08:02:25 -- host/discovery.sh@86 -- # get_subsystem_names 00:16:19.635 08:02:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:19.635 08:02:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:19.635 08:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.635 08:02:25 -- host/discovery.sh@59 -- # sort 00:16:19.635 08:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.635 08:02:25 -- host/discovery.sh@59 -- # xargs 00:16:19.635 08:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.635 08:02:25 -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:16:19.635 08:02:25 -- host/discovery.sh@87 -- # get_bdev_list 00:16:19.635 08:02:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.635 08:02:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.635 08:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.635 08:02:25 -- host/discovery.sh@55 -- # sort 00:16:19.635 08:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.635 08:02:25 -- host/discovery.sh@55 -- # 
xargs 00:16:19.635 08:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.635 08:02:25 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:16:19.635 08:02:25 -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:19.635 08:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.635 08:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.635 [2024-07-13 08:02:25.357295] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:19.635 08:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.635 08:02:25 -- host/discovery.sh@92 -- # get_subsystem_names 00:16:19.635 08:02:25 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:19.635 08:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.635 08:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.635 08:02:25 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:19.635 08:02:25 -- host/discovery.sh@59 -- # sort 00:16:19.635 08:02:25 -- host/discovery.sh@59 -- # xargs 00:16:19.635 08:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.635 08:02:25 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:16:19.635 08:02:25 -- host/discovery.sh@93 -- # get_bdev_list 00:16:19.635 08:02:25 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:19.635 08:02:25 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:19.635 08:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.635 08:02:25 -- host/discovery.sh@55 -- # sort 00:16:19.635 08:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.635 08:02:25 -- host/discovery.sh@55 -- # xargs 00:16:19.635 08:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.893 08:02:25 -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:16:19.893 08:02:25 -- host/discovery.sh@94 -- # get_notification_count 00:16:19.893 08:02:25 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:19.893 08:02:25 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:19.893 08:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.893 08:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.893 08:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.893 08:02:25 -- host/discovery.sh@74 -- # notification_count=0 00:16:19.893 08:02:25 -- host/discovery.sh@75 -- # notify_id=0 00:16:19.893 08:02:25 -- host/discovery.sh@95 -- # [[ 0 == 0 ]] 00:16:19.893 08:02:25 -- host/discovery.sh@99 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:16:19.893 08:02:25 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:19.893 08:02:25 -- common/autotest_common.sh@10 -- # set +x 00:16:19.893 08:02:25 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:19.893 08:02:25 -- host/discovery.sh@100 -- # sleep 1 00:16:20.458 [2024-07-13 08:02:26.005769] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:20.458 [2024-07-13 08:02:26.005831] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:20.458 [2024-07-13 08:02:26.005849] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:20.458 [2024-07-13 08:02:26.011818] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:20.458 [2024-07-13 08:02:26.067682] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:20.458 [2024-07-13 08:02:26.067710] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:21.023 08:02:26 -- host/discovery.sh@101 -- # get_subsystem_names 00:16:21.023 08:02:26 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:21.023 08:02:26 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:21.023 08:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:21.023 08:02:26 -- common/autotest_common.sh@10 -- # set +x 00:16:21.023 08:02:26 -- host/discovery.sh@59 -- # sort 00:16:21.024 08:02:26 -- host/discovery.sh@59 -- # xargs 00:16:21.024 08:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:21.024 08:02:26 -- host/discovery.sh@101 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.024 08:02:26 -- host/discovery.sh@102 -- # get_bdev_list 00:16:21.024 08:02:26 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:21.024 08:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:21.024 08:02:26 -- common/autotest_common.sh@10 -- # set +x 00:16:21.024 08:02:26 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:21.024 08:02:26 -- host/discovery.sh@55 -- # sort 00:16:21.024 08:02:26 -- host/discovery.sh@55 -- # xargs 00:16:21.024 08:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:21.024 08:02:26 -- host/discovery.sh@102 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:16:21.024 08:02:26 -- host/discovery.sh@103 -- # get_subsystem_paths nvme0 00:16:21.024 08:02:26 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:21.024 08:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:21.024 08:02:26 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:21.024 08:02:26 -- common/autotest_common.sh@10 -- # set +x 00:16:21.024 08:02:26 -- host/discovery.sh@63 -- # sort -n 00:16:21.024 08:02:26 -- 
host/discovery.sh@63 -- # xargs 00:16:21.024 08:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:21.024 08:02:26 -- host/discovery.sh@103 -- # [[ 4420 == \4\4\2\0 ]] 00:16:21.024 08:02:26 -- host/discovery.sh@104 -- # get_notification_count 00:16:21.024 08:02:26 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:16:21.024 08:02:26 -- host/discovery.sh@74 -- # jq '. | length' 00:16:21.024 08:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:21.024 08:02:26 -- common/autotest_common.sh@10 -- # set +x 00:16:21.024 08:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:21.024 08:02:26 -- host/discovery.sh@74 -- # notification_count=1 00:16:21.024 08:02:26 -- host/discovery.sh@75 -- # notify_id=1 00:16:21.024 08:02:26 -- host/discovery.sh@105 -- # [[ 1 == 1 ]] 00:16:21.024 08:02:26 -- host/discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:16:21.024 08:02:26 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:21.024 08:02:26 -- common/autotest_common.sh@10 -- # set +x 00:16:21.024 08:02:26 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:21.024 08:02:26 -- host/discovery.sh@109 -- # sleep 1 00:16:22.397 08:02:27 -- host/discovery.sh@110 -- # get_bdev_list 00:16:22.397 08:02:27 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:22.397 08:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:22.397 08:02:27 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:22.397 08:02:27 -- host/discovery.sh@55 -- # sort 00:16:22.397 08:02:27 -- common/autotest_common.sh@10 -- # set +x 00:16:22.397 08:02:27 -- host/discovery.sh@55 -- # xargs 00:16:22.397 08:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:22.397 08:02:27 -- host/discovery.sh@110 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:22.397 08:02:27 -- host/discovery.sh@111 -- # get_notification_count 00:16:22.397 08:02:27 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:22.397 08:02:27 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:16:22.397 08:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:22.397 08:02:27 -- common/autotest_common.sh@10 -- # set +x 00:16:22.397 08:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:22.397 08:02:27 -- host/discovery.sh@74 -- # notification_count=1 00:16:22.397 08:02:27 -- host/discovery.sh@75 -- # notify_id=2 00:16:22.397 08:02:27 -- host/discovery.sh@112 -- # [[ 1 == 1 ]] 00:16:22.397 08:02:27 -- host/discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:16:22.397 08:02:27 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:22.397 08:02:27 -- common/autotest_common.sh@10 -- # set +x 00:16:22.397 [2024-07-13 08:02:27.896945] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:16:22.397 [2024-07-13 08:02:27.897611] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:22.397 [2024-07-13 08:02:27.897638] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:22.397 08:02:27 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:22.397 08:02:27 -- host/discovery.sh@117 -- # sleep 1 00:16:22.397 [2024-07-13 08:02:27.903601] bdev_nvme.c:6683:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:16:22.397 [2024-07-13 08:02:27.960822] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:22.397 [2024-07-13 08:02:27.960843] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:22.397 [2024-07-13 08:02:27.960849] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:23.329 08:02:28 -- host/discovery.sh@118 -- # get_subsystem_names 00:16:23.329 08:02:28 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:23.330 08:02:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.330 08:02:28 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:23.330 08:02:28 -- host/discovery.sh@59 -- # sort 00:16:23.330 08:02:28 -- common/autotest_common.sh@10 -- # set +x 00:16:23.330 08:02:28 -- host/discovery.sh@59 -- # xargs 00:16:23.330 08:02:28 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.330 08:02:28 -- host/discovery.sh@118 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.330 08:02:28 -- host/discovery.sh@119 -- # get_bdev_list 00:16:23.330 08:02:28 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:23.330 08:02:28 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:23.330 08:02:28 -- host/discovery.sh@55 -- # sort 00:16:23.330 08:02:28 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.330 08:02:28 -- common/autotest_common.sh@10 -- # set +x 00:16:23.330 08:02:28 -- host/discovery.sh@55 -- # xargs 00:16:23.330 08:02:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.330 08:02:29 -- host/discovery.sh@119 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:23.330 08:02:29 -- host/discovery.sh@120 -- # get_subsystem_paths nvme0 00:16:23.330 08:02:29 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:23.330 08:02:29 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.330 08:02:29 -- common/autotest_common.sh@10 -- # set +x 00:16:23.330 08:02:29 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:23.330 08:02:29 -- host/discovery.sh@63 -- # sort -n 00:16:23.330 08:02:29 -- host/discovery.sh@63 -- # xargs 00:16:23.330 08:02:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.330 08:02:29 -- host/discovery.sh@120 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:16:23.330 08:02:29 -- host/discovery.sh@121 -- # get_notification_count 00:16:23.330 08:02:29 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:23.330 08:02:29 -- host/discovery.sh@74 -- # jq '. | length' 00:16:23.330 08:02:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.330 08:02:29 -- common/autotest_common.sh@10 -- # set +x 00:16:23.330 08:02:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.330 08:02:29 -- host/discovery.sh@74 -- # notification_count=0 00:16:23.330 08:02:29 -- host/discovery.sh@75 -- # notify_id=2 00:16:23.330 08:02:29 -- host/discovery.sh@122 -- # [[ 0 == 0 ]] 00:16:23.330 08:02:29 -- host/discovery.sh@126 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:23.330 08:02:29 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:23.330 08:02:29 -- common/autotest_common.sh@10 -- # set +x 00:16:23.330 [2024-07-13 08:02:29.139882] bdev_nvme.c:6741:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:16:23.330 [2024-07-13 08:02:29.139919] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:23.330 08:02:29 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:23.330 08:02:29 -- host/discovery.sh@127 -- # sleep 1 00:16:23.588 [2024-07-13 08:02:29.145463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.588 [2024-07-13 08:02:29.145507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.588 [2024-07-13 08:02:29.145538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.588 [2024-07-13 08:02:29.145548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.588 [2024-07-13 08:02:29.145558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.588 [2024-07-13 08:02:29.145567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.588 [2024-07-13 08:02:29.145591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:23.588 [2024-07-13 08:02:29.145600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:23.588 [2024-07-13 08:02:29.145609] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x244f0c0 is same with the state(5) to be set 00:16:23.588 [2024-07-13 08:02:29.145946] bdev_nvme.c:6546:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 
not found 00:16:23.588 [2024-07-13 08:02:29.145969] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:23.588 [2024-07-13 08:02:29.146040] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x244f0c0 (9): Bad file descriptor 00:16:24.522 08:02:30 -- host/discovery.sh@128 -- # get_subsystem_names 00:16:24.522 08:02:30 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:24.522 08:02:30 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:24.522 08:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.522 08:02:30 -- common/autotest_common.sh@10 -- # set +x 00:16:24.522 08:02:30 -- host/discovery.sh@59 -- # xargs 00:16:24.522 08:02:30 -- host/discovery.sh@59 -- # sort 00:16:24.522 08:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.522 08:02:30 -- host/discovery.sh@128 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.522 08:02:30 -- host/discovery.sh@129 -- # get_bdev_list 00:16:24.522 08:02:30 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:24.522 08:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.522 08:02:30 -- host/discovery.sh@55 -- # sort 00:16:24.522 08:02:30 -- common/autotest_common.sh@10 -- # set +x 00:16:24.522 08:02:30 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:24.522 08:02:30 -- host/discovery.sh@55 -- # xargs 00:16:24.522 08:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.522 08:02:30 -- host/discovery.sh@129 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:24.522 08:02:30 -- host/discovery.sh@130 -- # get_subsystem_paths nvme0 00:16:24.522 08:02:30 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:16:24.522 08:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.522 08:02:30 -- common/autotest_common.sh@10 -- # set +x 00:16:24.522 08:02:30 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:16:24.522 08:02:30 -- host/discovery.sh@63 -- # sort -n 00:16:24.522 08:02:30 -- host/discovery.sh@63 -- # xargs 00:16:24.522 08:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.522 08:02:30 -- host/discovery.sh@130 -- # [[ 4421 == \4\4\2\1 ]] 00:16:24.522 08:02:30 -- host/discovery.sh@131 -- # get_notification_count 00:16:24.522 08:02:30 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:24.522 08:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.522 08:02:30 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:24.522 08:02:30 -- common/autotest_common.sh@10 -- # set +x 00:16:24.522 08:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.780 08:02:30 -- host/discovery.sh@74 -- # notification_count=0 00:16:24.780 08:02:30 -- host/discovery.sh@75 -- # notify_id=2 00:16:24.780 08:02:30 -- host/discovery.sh@132 -- # [[ 0 == 0 ]] 00:16:24.780 08:02:30 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:16:24.780 08:02:30 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:24.780 08:02:30 -- common/autotest_common.sh@10 -- # set +x 00:16:24.780 08:02:30 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:24.780 08:02:30 -- host/discovery.sh@135 -- # sleep 1 00:16:25.712 08:02:31 -- host/discovery.sh@136 -- # get_subsystem_names 00:16:25.712 08:02:31 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:16:25.712 08:02:31 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:16:25.712 08:02:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:25.712 08:02:31 -- host/discovery.sh@59 -- # sort 00:16:25.712 08:02:31 -- common/autotest_common.sh@10 -- # set +x 00:16:25.712 08:02:31 -- host/discovery.sh@59 -- # xargs 00:16:25.712 08:02:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:25.712 08:02:31 -- host/discovery.sh@136 -- # [[ '' == '' ]] 00:16:25.712 08:02:31 -- host/discovery.sh@137 -- # get_bdev_list 00:16:25.712 08:02:31 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:25.712 08:02:31 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:25.712 08:02:31 -- host/discovery.sh@55 -- # sort 00:16:25.712 08:02:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:25.712 08:02:31 -- host/discovery.sh@55 -- # xargs 00:16:25.712 08:02:31 -- common/autotest_common.sh@10 -- # set +x 00:16:25.712 08:02:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:25.712 08:02:31 -- host/discovery.sh@137 -- # [[ '' == '' ]] 00:16:25.712 08:02:31 -- host/discovery.sh@138 -- # get_notification_count 00:16:25.713 08:02:31 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:16:25.713 08:02:31 -- host/discovery.sh@74 -- # jq '. 
| length' 00:16:25.713 08:02:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:25.713 08:02:31 -- common/autotest_common.sh@10 -- # set +x 00:16:25.971 08:02:31 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:25.971 08:02:31 -- host/discovery.sh@74 -- # notification_count=2 00:16:25.971 08:02:31 -- host/discovery.sh@75 -- # notify_id=4 00:16:25.971 08:02:31 -- host/discovery.sh@139 -- # [[ 2 == 2 ]] 00:16:25.971 08:02:31 -- host/discovery.sh@142 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:25.971 08:02:31 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:25.971 08:02:31 -- common/autotest_common.sh@10 -- # set +x 00:16:26.905 [2024-07-13 08:02:32.588959] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:26.905 [2024-07-13 08:02:32.588990] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:26.905 [2024-07-13 08:02:32.589008] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:26.905 [2024-07-13 08:02:32.595003] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:16:26.905 [2024-07-13 08:02:32.654621] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:26.905 [2024-07-13 08:02:32.654663] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:16:26.905 08:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:26.905 08:02:32 -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:26.905 08:02:32 -- common/autotest_common.sh@640 -- # local es=0 00:16:26.906 08:02:32 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:26.906 08:02:32 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:26.906 08:02:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:26.906 08:02:32 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:26.906 08:02:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:26.906 08:02:32 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:26.906 08:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.906 08:02:32 -- common/autotest_common.sh@10 -- # set +x 00:16:26.906 request: 00:16:26.906 { 00:16:26.906 "name": "nvme", 00:16:26.906 "trtype": "tcp", 00:16:26.906 "traddr": "10.0.0.2", 00:16:26.906 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:26.906 "adrfam": "ipv4", 00:16:26.906 "trsvcid": "8009", 00:16:26.906 "wait_for_attach": true, 00:16:26.906 "method": "bdev_nvme_start_discovery", 00:16:26.906 "req_id": 1 00:16:26.906 } 00:16:26.906 Got JSON-RPC error response 00:16:26.906 response: 00:16:26.906 { 00:16:26.906 "code": -17, 00:16:26.906 "message": "File exists" 00:16:26.906 } 00:16:26.906 08:02:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:26.906 08:02:32 -- common/autotest_common.sh@643 -- # es=1 00:16:26.906 08:02:32 -- 
common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:26.906 08:02:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:26.906 08:02:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:26.906 08:02:32 -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:16:26.906 08:02:32 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:26.906 08:02:32 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:26.906 08:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:26.906 08:02:32 -- host/discovery.sh@67 -- # sort 00:16:26.906 08:02:32 -- common/autotest_common.sh@10 -- # set +x 00:16:26.906 08:02:32 -- host/discovery.sh@67 -- # xargs 00:16:26.906 08:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.164 08:02:32 -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:16:27.164 08:02:32 -- host/discovery.sh@147 -- # get_bdev_list 00:16:27.164 08:02:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.164 08:02:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:27.164 08:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.164 08:02:32 -- host/discovery.sh@55 -- # sort 00:16:27.165 08:02:32 -- common/autotest_common.sh@10 -- # set +x 00:16:27.165 08:02:32 -- host/discovery.sh@55 -- # xargs 00:16:27.165 08:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.165 08:02:32 -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:27.165 08:02:32 -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:27.165 08:02:32 -- common/autotest_common.sh@640 -- # local es=0 00:16:27.165 08:02:32 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:27.165 08:02:32 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:27.165 08:02:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:27.165 08:02:32 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:27.165 08:02:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:27.165 08:02:32 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:16:27.165 08:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.165 08:02:32 -- common/autotest_common.sh@10 -- # set +x 00:16:27.165 request: 00:16:27.165 { 00:16:27.165 "name": "nvme_second", 00:16:27.165 "trtype": "tcp", 00:16:27.165 "traddr": "10.0.0.2", 00:16:27.165 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:27.165 "adrfam": "ipv4", 00:16:27.165 "trsvcid": "8009", 00:16:27.165 "wait_for_attach": true, 00:16:27.165 "method": "bdev_nvme_start_discovery", 00:16:27.165 "req_id": 1 00:16:27.165 } 00:16:27.165 Got JSON-RPC error response 00:16:27.165 response: 00:16:27.165 { 00:16:27.165 "code": -17, 00:16:27.165 "message": "File exists" 00:16:27.165 } 00:16:27.165 08:02:32 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:27.165 08:02:32 -- common/autotest_common.sh@643 -- # es=1 00:16:27.165 08:02:32 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:27.165 08:02:32 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:27.165 08:02:32 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:27.165 
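Both JSON-RPC failures above are the expected outcome: bdev_nvme_start_discovery rejects a second discovery request for an address/port that is already being discovered and returns -17 ("File exists"), whether the existing controller name (nvme) or a new one (nvme_second) is supplied. The test asserts this by wrapping the call in its NOT helper; a minimal stand-alone sketch of the same assertion (the scripts/rpc.py invocation is an assumed illustration — the log itself goes through the rpc_cmd wrapper on /tmp/host.sock):

  # Expect the duplicate discovery request to fail with -17 "File exists";
  # if it unexpectedly succeeds, fail the check.
  if scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery \
        -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test -w; then
      echo "duplicate discovery unexpectedly succeeded" >&2
      exit 1
  fi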
08:02:32 -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:16:27.165 08:02:32 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:27.165 08:02:32 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:27.165 08:02:32 -- host/discovery.sh@67 -- # xargs 00:16:27.165 08:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.165 08:02:32 -- host/discovery.sh@67 -- # sort 00:16:27.165 08:02:32 -- common/autotest_common.sh@10 -- # set +x 00:16:27.165 08:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.165 08:02:32 -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:16:27.165 08:02:32 -- host/discovery.sh@153 -- # get_bdev_list 00:16:27.165 08:02:32 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:27.165 08:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.165 08:02:32 -- common/autotest_common.sh@10 -- # set +x 00:16:27.165 08:02:32 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:16:27.165 08:02:32 -- host/discovery.sh@55 -- # sort 00:16:27.165 08:02:32 -- host/discovery.sh@55 -- # xargs 00:16:27.165 08:02:32 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:27.165 08:02:32 -- host/discovery.sh@153 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:16:27.165 08:02:32 -- host/discovery.sh@156 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:27.165 08:02:32 -- common/autotest_common.sh@640 -- # local es=0 00:16:27.165 08:02:32 -- common/autotest_common.sh@642 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:27.165 08:02:32 -- common/autotest_common.sh@628 -- # local arg=rpc_cmd 00:16:27.165 08:02:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:27.165 08:02:32 -- common/autotest_common.sh@632 -- # type -t rpc_cmd 00:16:27.165 08:02:32 -- common/autotest_common.sh@632 -- # case "$(type -t "$arg")" in 00:16:27.165 08:02:32 -- common/autotest_common.sh@643 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:16:27.165 08:02:32 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:27.165 08:02:32 -- common/autotest_common.sh@10 -- # set +x 00:16:28.539 [2024-07-13 08:02:33.940389] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.539 [2024-07-13 08:02:33.940532] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.539 [2024-07-13 08:02:33.940610] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:28.539 [2024-07-13 08:02:33.940641] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24675a0 with addr=10.0.0.2, port=8010 00:16:28.539 [2024-07-13 08:02:33.940675] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:28.539 [2024-07-13 08:02:33.940685] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:28.539 [2024-07-13 08:02:33.940696] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:29.495 [2024-07-13 08:02:34.940393] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:16:29.495 [2024-07-13 08:02:34.940528] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:16:29.495 [2024-07-13 08:02:34.940571] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:16:29.495 [2024-07-13 08:02:34.940588] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x24675a0 with addr=10.0.0.2, port=8010 00:16:29.495 [2024-07-13 08:02:34.940606] nvme_tcp.c:2596:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:16:29.495 [2024-07-13 08:02:34.940615] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:16:29.495 [2024-07-13 08:02:34.940625] bdev_nvme.c:6821:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:16:30.455 [2024-07-13 08:02:35.940226] bdev_nvme.c:6802:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:16:30.455 request: 00:16:30.455 { 00:16:30.455 "name": "nvme_second", 00:16:30.455 "trtype": "tcp", 00:16:30.455 "traddr": "10.0.0.2", 00:16:30.455 "hostnqn": "nqn.2021-12.io.spdk:test", 00:16:30.455 "adrfam": "ipv4", 00:16:30.455 "trsvcid": "8010", 00:16:30.455 "attach_timeout_ms": 3000, 00:16:30.455 "method": "bdev_nvme_start_discovery", 00:16:30.455 "req_id": 1 00:16:30.455 } 00:16:30.455 Got JSON-RPC error response 00:16:30.455 response: 00:16:30.455 { 00:16:30.455 "code": -110, 00:16:30.455 "message": "Connection timed out" 00:16:30.455 } 00:16:30.455 08:02:35 -- common/autotest_common.sh@579 -- # [[ 1 == 0 ]] 00:16:30.455 08:02:35 -- common/autotest_common.sh@643 -- # es=1 00:16:30.455 08:02:35 -- common/autotest_common.sh@651 -- # (( es > 128 )) 00:16:30.455 08:02:35 -- common/autotest_common.sh@662 -- # [[ -n '' ]] 00:16:30.455 08:02:35 -- common/autotest_common.sh@667 -- # (( !es == 0 )) 00:16:30.455 08:02:35 -- host/discovery.sh@158 -- # get_discovery_ctrlrs 00:16:30.455 08:02:35 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:16:30.455 08:02:35 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:16:30.455 08:02:35 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:30.455 08:02:35 -- host/discovery.sh@67 -- # sort 00:16:30.455 08:02:35 -- host/discovery.sh@67 -- # xargs 00:16:30.455 08:02:35 -- common/autotest_common.sh@10 -- # set +x 00:16:30.455 08:02:35 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:30.455 08:02:36 -- host/discovery.sh@158 -- # [[ nvme == \n\v\m\e ]] 00:16:30.455 08:02:36 -- host/discovery.sh@160 -- # trap - SIGINT SIGTERM EXIT 00:16:30.455 08:02:36 -- host/discovery.sh@162 -- # kill 78376 00:16:30.455 08:02:36 -- host/discovery.sh@163 -- # nvmftestfini 00:16:30.455 08:02:36 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:30.455 08:02:36 -- nvmf/common.sh@116 -- # sync 00:16:30.455 08:02:36 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:30.455 08:02:36 -- nvmf/common.sh@119 -- # set +e 00:16:30.455 08:02:36 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:30.455 08:02:36 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:30.455 rmmod nvme_tcp 00:16:30.455 rmmod nvme_fabrics 00:16:30.455 rmmod nvme_keyring 00:16:30.455 08:02:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:30.456 08:02:36 -- nvmf/common.sh@123 -- # set -e 00:16:30.456 08:02:36 -- nvmf/common.sh@124 -- # return 0 00:16:30.456 08:02:36 -- nvmf/common.sh@477 -- # '[' -n 78350 ']' 00:16:30.456 08:02:36 -- nvmf/common.sh@478 -- # killprocess 78350 00:16:30.456 08:02:36 -- common/autotest_common.sh@926 -- # '[' -z 78350 ']' 00:16:30.456 08:02:36 -- common/autotest_common.sh@930 -- # kill -0 78350 00:16:30.456 08:02:36 -- 
common/autotest_common.sh@931 -- # uname 00:16:30.456 08:02:36 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:30.456 08:02:36 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78350 00:16:30.456 killing process with pid 78350 00:16:30.456 08:02:36 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:30.456 08:02:36 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:30.456 08:02:36 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78350' 00:16:30.456 08:02:36 -- common/autotest_common.sh@945 -- # kill 78350 00:16:30.456 08:02:36 -- common/autotest_common.sh@950 -- # wait 78350 00:16:30.716 08:02:36 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:30.716 08:02:36 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:30.716 08:02:36 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:30.716 08:02:36 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:30.716 08:02:36 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:30.716 08:02:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.716 08:02:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.716 08:02:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.716 08:02:36 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:30.716 00:16:30.716 real 0m13.895s 00:16:30.716 user 0m26.772s 00:16:30.716 sys 0m2.208s 00:16:30.716 08:02:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:30.716 08:02:36 -- common/autotest_common.sh@10 -- # set +x 00:16:30.716 ************************************ 00:16:30.716 END TEST nvmf_discovery 00:16:30.716 ************************************ 00:16:30.716 08:02:36 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:30.716 08:02:36 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:30.716 08:02:36 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:30.716 08:02:36 -- common/autotest_common.sh@10 -- # set +x 00:16:30.716 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:16:30.716 ************************************ 00:16:30.716 START TEST nvmf_discovery_remove_ifc 00:16:30.716 ************************************ 00:16:30.716 08:02:36 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:16:30.716 * Looking for test storage... 
00:16:30.716 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:30.716 08:02:36 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:30.716 08:02:36 -- nvmf/common.sh@7 -- # uname -s 00:16:30.716 08:02:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.716 08:02:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.716 08:02:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.716 08:02:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.716 08:02:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.716 08:02:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.716 08:02:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.716 08:02:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.716 08:02:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.716 08:02:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.716 08:02:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:16:30.716 08:02:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:16:30.716 08:02:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.716 08:02:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.716 08:02:36 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:30.716 08:02:36 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.716 08:02:36 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.716 08:02:36 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.716 08:02:36 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.716 08:02:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.716 08:02:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.716 08:02:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.716 08:02:36 -- 
paths/export.sh@5 -- # export PATH 00:16:30.716 08:02:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.716 08:02:36 -- nvmf/common.sh@46 -- # : 0 00:16:30.716 08:02:36 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:30.716 08:02:36 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:30.716 08:02:36 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:30.716 08:02:36 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.716 08:02:36 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.716 08:02:36 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:30.716 08:02:36 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:30.716 08:02:36 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:30.716 08:02:36 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:16:30.716 08:02:36 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:16:30.716 08:02:36 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:16:30.716 08:02:36 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:30.716 08:02:36 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:16:30.716 08:02:36 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:16:30.716 08:02:36 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:16:30.716 08:02:36 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:30.716 08:02:36 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.716 08:02:36 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:30.716 08:02:36 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:30.716 08:02:36 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:30.716 08:02:36 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.716 08:02:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:30.716 08:02:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.976 08:02:36 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:30.976 08:02:36 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:30.976 08:02:36 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:30.976 08:02:36 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:30.976 08:02:36 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:30.976 08:02:36 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:30.976 08:02:36 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.976 08:02:36 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:30.976 08:02:36 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:30.976 08:02:36 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:30.976 08:02:36 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:30.976 08:02:36 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:30.976 08:02:36 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:30.976 08:02:36 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:16:30.976 08:02:36 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:30.976 08:02:36 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:30.976 08:02:36 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:30.976 08:02:36 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:30.976 08:02:36 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:30.976 08:02:36 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:30.976 Cannot find device "nvmf_tgt_br" 00:16:30.976 08:02:36 -- nvmf/common.sh@154 -- # true 00:16:30.976 08:02:36 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.976 Cannot find device "nvmf_tgt_br2" 00:16:30.976 08:02:36 -- nvmf/common.sh@155 -- # true 00:16:30.976 08:02:36 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:30.976 08:02:36 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:30.976 Cannot find device "nvmf_tgt_br" 00:16:30.976 08:02:36 -- nvmf/common.sh@157 -- # true 00:16:30.976 08:02:36 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:30.976 Cannot find device "nvmf_tgt_br2" 00:16:30.976 08:02:36 -- nvmf/common.sh@158 -- # true 00:16:30.976 08:02:36 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:30.976 08:02:36 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:30.976 08:02:36 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.976 08:02:36 -- nvmf/common.sh@161 -- # true 00:16:30.976 08:02:36 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.976 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.976 08:02:36 -- nvmf/common.sh@162 -- # true 00:16:30.976 08:02:36 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:30.976 08:02:36 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:30.976 08:02:36 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:30.976 08:02:36 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.976 08:02:36 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.976 08:02:36 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.976 08:02:36 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.976 08:02:36 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:30.976 08:02:36 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:30.976 08:02:36 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:30.976 08:02:36 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:30.976 08:02:36 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:30.976 08:02:36 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:30.976 08:02:36 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.976 08:02:36 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:30.976 08:02:36 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.976 08:02:36 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:30.976 08:02:36 -- nvmf/common.sh@192 -- # ip 
link set nvmf_br up 00:16:30.976 08:02:36 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:31.236 08:02:36 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:31.236 08:02:36 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:31.236 08:02:36 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:31.236 08:02:36 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:31.236 08:02:36 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:31.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:16:31.236 00:16:31.236 --- 10.0.0.2 ping statistics --- 00:16:31.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.236 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:16:31.236 08:02:36 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:31.236 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:31.236 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:16:31.236 00:16:31.236 --- 10.0.0.3 ping statistics --- 00:16:31.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.236 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:16:31.236 08:02:36 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:31.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:31.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.017 ms 00:16:31.236 00:16:31.236 --- 10.0.0.1 ping statistics --- 00:16:31.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.236 rtt min/avg/max/mdev = 0.017/0.017/0.017/0.000 ms 00:16:31.236 08:02:36 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.236 08:02:36 -- nvmf/common.sh@421 -- # return 0 00:16:31.236 08:02:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:31.236 08:02:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.236 08:02:36 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:31.236 08:02:36 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:31.236 08:02:36 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.236 08:02:36 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:31.236 08:02:36 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:31.236 08:02:36 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:16:31.236 08:02:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:31.236 08:02:36 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:31.236 08:02:36 -- common/autotest_common.sh@10 -- # set +x 00:16:31.236 08:02:36 -- nvmf/common.sh@469 -- # nvmfpid=78798 00:16:31.236 08:02:36 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:31.236 08:02:36 -- nvmf/common.sh@470 -- # waitforlisten 78798 00:16:31.236 08:02:36 -- common/autotest_common.sh@819 -- # '[' -z 78798 ']' 00:16:31.236 08:02:36 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.236 08:02:36 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:31.236 08:02:36 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
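The nvmf_veth_init sequence traced above rebuilds, for this second test, the same three-leg topology used earlier: one initiator veth kept on the host, two target veths moved into the nvmf_tgt_ns_spdk namespace, their bridge-side peers enslaved to nvmf_br, and an iptables rule admitting NVMe/TCP traffic on port 4420. A condensed recap of those commands (names and addresses exactly as in the log; the intermediate "link set ... up" steps are omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link add nvmf_br type bridge
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings that follow confirm this plumbing end to end before the discovery_remove_ifc target is brought up inside the namespace.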
00:16:31.236 08:02:36 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:31.236 08:02:36 -- common/autotest_common.sh@10 -- # set +x 00:16:31.236 [2024-07-13 08:02:36.924337] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:31.236 [2024-07-13 08:02:36.924434] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.496 [2024-07-13 08:02:37.067331] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.496 [2024-07-13 08:02:37.106809] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:31.496 [2024-07-13 08:02:37.107081] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.496 [2024-07-13 08:02:37.107097] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.496 [2024-07-13 08:02:37.107106] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.496 [2024-07-13 08:02:37.107137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:32.435 08:02:37 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:32.435 08:02:37 -- common/autotest_common.sh@852 -- # return 0 00:16:32.435 08:02:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:32.435 08:02:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:32.435 08:02:37 -- common/autotest_common.sh@10 -- # set +x 00:16:32.435 08:02:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:32.435 08:02:37 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:16:32.435 08:02:37 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.435 08:02:37 -- common/autotest_common.sh@10 -- # set +x 00:16:32.435 [2024-07-13 08:02:38.000419] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:32.435 [2024-07-13 08:02:38.008558] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:16:32.435 null0 00:16:32.435 [2024-07-13 08:02:38.040486] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:32.435 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:16:32.435 08:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.435 08:02:38 -- host/discovery_remove_ifc.sh@59 -- # hostpid=78826 00:16:32.435 08:02:38 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 78826 /tmp/host.sock 00:16:32.435 08:02:38 -- common/autotest_common.sh@819 -- # '[' -z 78826 ']' 00:16:32.435 08:02:38 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:16:32.435 08:02:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/tmp/host.sock 00:16:32.435 08:02:38 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:32.435 08:02:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:16:32.435 08:02:38 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:32.435 08:02:38 -- common/autotest_common.sh@10 -- # set +x 00:16:32.435 [2024-07-13 08:02:38.110427] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:32.435 [2024-07-13 08:02:38.110495] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78826 ] 00:16:32.694 [2024-07-13 08:02:38.253149] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:32.694 [2024-07-13 08:02:38.295654] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:32.694 [2024-07-13 08:02:38.296141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:32.694 08:02:38 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:32.694 08:02:38 -- common/autotest_common.sh@852 -- # return 0 00:16:32.694 08:02:38 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:32.694 08:02:38 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:16:32.694 08:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.694 08:02:38 -- common/autotest_common.sh@10 -- # set +x 00:16:32.694 08:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.694 08:02:38 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:16:32.694 08:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.694 08:02:38 -- common/autotest_common.sh@10 -- # set +x 00:16:32.694 08:02:38 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:32.694 08:02:38 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:16:32.694 08:02:38 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:32.694 08:02:38 -- common/autotest_common.sh@10 -- # set +x 00:16:33.633 [2024-07-13 08:02:39.443673] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:33.633 [2024-07-13 08:02:39.443919] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:33.633 [2024-07-13 08:02:39.444013] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:33.893 [2024-07-13 08:02:39.449746] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:16:33.893 [2024-07-13 08:02:39.506153] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:33.893 [2024-07-13 08:02:39.506330] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:33.893 [2024-07-13 08:02:39.506403] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:33.893 [2024-07-13 08:02:39.506578] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:16:33.893 [2024-07-13 08:02:39.506789] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:33.893 08:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:33.893 [2024-07-13 
08:02:39.512302] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb6da40 was disconnected and freed. delete nvme_qpair. 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.893 08:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:33.893 08:02:39 -- common/autotest_common.sh@10 -- # set +x 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:33.893 08:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:33.893 08:02:39 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:33.893 08:02:39 -- common/autotest_common.sh@10 -- # set +x 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:33.893 08:02:39 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:33.893 08:02:39 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:35.273 08:02:40 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:35.273 08:02:40 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:35.273 08:02:40 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:35.273 08:02:40 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:35.273 08:02:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:35.273 08:02:40 -- common/autotest_common.sh@10 -- # set +x 00:16:35.273 08:02:40 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:35.273 08:02:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:35.273 08:02:40 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:35.273 08:02:40 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:36.210 08:02:41 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:36.210 08:02:41 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:36.210 08:02:41 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:36.210 08:02:41 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:36.210 08:02:41 -- common/autotest_common.sh@10 -- # set +x 00:16:36.210 08:02:41 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:36.210 08:02:41 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:36.210 08:02:41 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:36.210 08:02:41 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:36.210 08:02:41 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:37.167 08:02:42 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:37.167 08:02:42 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:16:37.167 08:02:42 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:37.167 08:02:42 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:37.167 08:02:42 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:37.167 08:02:42 -- common/autotest_common.sh@10 -- # set +x 00:16:37.167 08:02:42 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:37.167 08:02:42 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:37.167 08:02:42 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:37.167 08:02:42 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:38.103 08:02:43 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:38.103 08:02:43 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:38.103 08:02:43 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:38.103 08:02:43 -- common/autotest_common.sh@10 -- # set +x 00:16:38.103 08:02:43 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:38.103 08:02:43 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:38.103 08:02:43 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:38.103 08:02:43 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:38.103 08:02:43 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:38.103 08:02:43 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:39.481 08:02:44 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:39.481 08:02:44 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:39.481 08:02:44 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:39.481 08:02:44 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:39.481 08:02:44 -- common/autotest_common.sh@10 -- # set +x 00:16:39.481 08:02:44 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:39.481 08:02:44 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:39.481 08:02:44 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:39.481 [2024-07-13 08:02:44.944070] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:16:39.481 [2024-07-13 08:02:44.944315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.481 [2024-07-13 08:02:44.944438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.481 [2024-07-13 08:02:44.944458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.481 [2024-07-13 08:02:44.944469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.481 [2024-07-13 08:02:44.944481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.481 [2024-07-13 08:02:44.944491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.481 [2024-07-13 08:02:44.944502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.481 [2024-07-13 08:02:44.944512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.481 [2024-07-13 
08:02:44.944523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:39.481 [2024-07-13 08:02:44.944533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:39.481 [2024-07-13 08:02:44.944544] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bfc0 is same with the state(5) to be set 00:16:39.481 08:02:44 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:39.481 08:02:44 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:39.481 [2024-07-13 08:02:44.954068] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1bfc0 (9): Bad file descriptor 00:16:39.481 [2024-07-13 08:02:44.964086] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:40.415 08:02:45 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:40.416 08:02:45 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:40.416 08:02:45 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:40.416 08:02:45 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:40.416 08:02:45 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:40.416 08:02:45 -- common/autotest_common.sh@10 -- # set +x 00:16:40.416 08:02:45 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:40.416 [2024-07-13 08:02:45.973897] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:16:41.348 [2024-07-13 08:02:46.996909] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:42.279 [2024-07-13 08:02:48.020943] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:16:42.279 [2024-07-13 08:02:48.021108] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xb1bfc0 with addr=10.0.0.2, port=4420 00:16:42.279 [2024-07-13 08:02:48.021147] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb1bfc0 is same with the state(5) to be set 00:16:42.279 [2024-07-13 08:02:48.021205] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:16:42.279 [2024-07-13 08:02:48.021231] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:16:42.279 [2024-07-13 08:02:48.021251] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:16:42.279 [2024-07-13 08:02:48.021273] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:16:42.279 [2024-07-13 08:02:48.022149] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1bfc0 (9): Bad file descriptor 00:16:42.279 [2024-07-13 08:02:48.022218] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:16:42.279 [2024-07-13 08:02:48.022274] bdev_nvme.c:6510:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:16:42.279 [2024-07-13 08:02:48.022346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.279 [2024-07-13 08:02:48.022378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.279 [2024-07-13 08:02:48.022407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.279 [2024-07-13 08:02:48.022430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.279 [2024-07-13 08:02:48.022453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.279 [2024-07-13 08:02:48.022476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.279 [2024-07-13 08:02:48.022499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.279 [2024-07-13 08:02:48.022528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.279 [2024-07-13 08:02:48.022552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:16:42.279 [2024-07-13 08:02:48.022574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:42.279 [2024-07-13 08:02:48.022596] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:16:42.279 [2024-07-13 08:02:48.022629] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb1bc60 (9): Bad file descriptor 00:16:42.279 [2024-07-13 08:02:48.023249] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:16:42.280 [2024-07-13 08:02:48.023285] nvme_ctrlr.c:1136:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:16:42.280 08:02:48 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:42.280 08:02:48 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:16:42.280 08:02:48 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:43.650 08:02:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:43.650 08:02:49 -- common/autotest_common.sh@10 -- # set +x 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:43.650 08:02:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:43.650 08:02:49 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:43.650 08:02:49 -- common/autotest_common.sh@10 -- # set +x 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:43.650 08:02:49 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:16:43.650 08:02:49 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:16:44.585 [2024-07-13 08:02:50.030927] bdev_nvme.c:6759:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:16:44.585 [2024-07-13 08:02:50.030978] bdev_nvme.c:6839:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:16:44.585 [2024-07-13 08:02:50.031000] bdev_nvme.c:6722:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:16:44.585 [2024-07-13 08:02:50.036970] bdev_nvme.c:6688:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:16:44.585 [2024-07-13 08:02:50.092617] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:16:44.585 [2024-07-13 08:02:50.092663] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:16:44.585 [2024-07-13 08:02:50.092686] bdev_nvme.c:7548:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:16:44.585 [2024-07-13 08:02:50.092702] bdev_nvme.c:6578:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:16:44.585 [2024-07-13 08:02:50.092711] bdev_nvme.c:6537:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:16:44.585 [2024-07-13 08:02:50.099554] bdev_nvme.c:1595:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0xb22fe0 was disconnected and freed. delete nvme_qpair. 00:16:44.585 08:02:50 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:16:44.585 08:02:50 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:16:44.585 08:02:50 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:16:44.585 08:02:50 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:44.585 08:02:50 -- common/autotest_common.sh@10 -- # set +x 00:16:44.585 08:02:50 -- host/discovery_remove_ifc.sh@29 -- # sort 00:16:44.585 08:02:50 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:16:44.585 08:02:50 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:44.585 08:02:50 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:16:44.585 08:02:50 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:16:44.585 08:02:50 -- host/discovery_remove_ifc.sh@90 -- # killprocess 78826 00:16:44.585 08:02:50 -- common/autotest_common.sh@926 -- # '[' -z 78826 ']' 00:16:44.585 08:02:50 -- common/autotest_common.sh@930 -- # kill -0 78826 00:16:44.585 08:02:50 -- common/autotest_common.sh@931 -- # uname 00:16:44.585 08:02:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:44.585 08:02:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78826 00:16:44.585 killing process with pid 78826 00:16:44.585 08:02:50 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:16:44.585 08:02:50 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:16:44.585 08:02:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78826' 00:16:44.585 08:02:50 -- common/autotest_common.sh@945 -- # kill 78826 00:16:44.585 08:02:50 -- common/autotest_common.sh@950 -- # wait 78826 00:16:44.859 08:02:50 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:16:44.859 08:02:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:44.859 08:02:50 -- nvmf/common.sh@116 -- # sync 00:16:44.859 08:02:50 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:16:44.859 08:02:50 -- nvmf/common.sh@119 -- # set +e 00:16:44.859 08:02:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:44.859 08:02:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:16:44.859 rmmod nvme_tcp 00:16:44.859 rmmod nvme_fabrics 00:16:44.859 rmmod nvme_keyring 00:16:44.859 08:02:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:44.859 08:02:50 -- nvmf/common.sh@123 -- # set -e 00:16:44.859 08:02:50 -- nvmf/common.sh@124 -- # return 0 00:16:44.859 08:02:50 -- nvmf/common.sh@477 -- # '[' -n 78798 ']' 00:16:44.859 08:02:50 -- nvmf/common.sh@478 -- # killprocess 78798 00:16:44.859 08:02:50 -- common/autotest_common.sh@926 -- # '[' -z 78798 ']' 00:16:44.859 08:02:50 -- common/autotest_common.sh@930 -- # kill -0 78798 00:16:44.859 08:02:50 -- common/autotest_common.sh@931 -- # uname 00:16:44.859 08:02:50 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:44.859 08:02:50 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 78798 00:16:44.859 killing process with pid 78798 00:16:44.859 08:02:50 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:44.859 08:02:50 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 
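The repeated get_bdev_list / sleep 1 iterations earlier in this test are a plain polling loop around the host RPC socket. A minimal sketch of the pattern, assuming SPDK's scripts/rpc.py in place of the rpc_cmd wrapper used by the test scripts:

    # Poll the host's bdev list over /tmp/host.sock until it matches the expected value.
    get_bdev_list() {
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
    wait_for_bdev() {
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }
    wait_for_bdev nvme0n1   # discovery attached the first controller
    wait_for_bdev ''        # bdev disappears after 10.0.0.2 is removed and the if goes down
    wait_for_bdev nvme1n1   # rediscovery re-attaches once the address is restored

The expected-value sequence mirrors what the trace shows: nvme0n1 before the interface removal, an empty list while the ctrlr-loss timeout expires, and nvme1n1 once the address is added back and the discovery service re-creates the controller.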
00:16:44.859 08:02:50 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 78798' 00:16:44.859 08:02:50 -- common/autotest_common.sh@945 -- # kill 78798 00:16:44.859 08:02:50 -- common/autotest_common.sh@950 -- # wait 78798 00:16:45.118 08:02:50 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:45.118 08:02:50 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:16:45.118 08:02:50 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:16:45.118 08:02:50 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:45.118 08:02:50 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:16:45.118 08:02:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.118 08:02:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.118 08:02:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.118 08:02:50 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:16:45.118 ************************************ 00:16:45.118 END TEST nvmf_discovery_remove_ifc 00:16:45.118 ************************************ 00:16:45.118 00:16:45.118 real 0m14.381s 00:16:45.118 user 0m22.665s 00:16:45.118 sys 0m2.505s 00:16:45.118 08:02:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:45.118 08:02:50 -- common/autotest_common.sh@10 -- # set +x 00:16:45.118 08:02:50 -- nvmf/nvmf.sh@106 -- # [[ tcp == \t\c\p ]] 00:16:45.118 08:02:50 -- nvmf/nvmf.sh@107 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:45.118 08:02:50 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:16:45.118 08:02:50 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:45.118 08:02:50 -- common/autotest_common.sh@10 -- # set +x 00:16:45.118 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:16:45.118 ************************************ 00:16:45.118 START TEST nvmf_digest 00:16:45.118 ************************************ 00:16:45.118 08:02:50 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:16:45.118 * Looking for test storage... 
00:16:45.377 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:16:45.377 08:02:50 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:45.377 08:02:50 -- nvmf/common.sh@7 -- # uname -s 00:16:45.377 08:02:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:45.377 08:02:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:45.377 08:02:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:45.377 08:02:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:45.377 08:02:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:45.377 08:02:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:45.377 08:02:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:45.377 08:02:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:45.377 08:02:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:45.377 08:02:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:45.377 08:02:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:16:45.377 08:02:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:16:45.377 08:02:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:45.377 08:02:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:45.377 08:02:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:45.377 08:02:50 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:45.377 08:02:50 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:45.377 08:02:50 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:45.377 08:02:50 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:45.377 08:02:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.378 08:02:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.378 08:02:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.378 08:02:50 -- paths/export.sh@5 
-- # export PATH 00:16:45.378 08:02:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:45.378 08:02:50 -- nvmf/common.sh@46 -- # : 0 00:16:45.378 08:02:50 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:45.378 08:02:50 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:45.378 08:02:50 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:45.378 08:02:50 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:45.378 08:02:50 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:45.378 08:02:50 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:45.378 08:02:50 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:45.378 08:02:50 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:45.378 08:02:50 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:45.378 08:02:50 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:16:45.378 08:02:50 -- host/digest.sh@16 -- # runtime=2 00:16:45.378 08:02:50 -- host/digest.sh@130 -- # [[ tcp != \t\c\p ]] 00:16:45.378 08:02:50 -- host/digest.sh@132 -- # nvmftestinit 00:16:45.378 08:02:50 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:16:45.378 08:02:50 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:45.378 08:02:50 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:45.378 08:02:50 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:45.378 08:02:50 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:45.378 08:02:50 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:45.378 08:02:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:45.378 08:02:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:45.378 08:02:50 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:16:45.378 08:02:50 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:16:45.378 08:02:50 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:16:45.378 08:02:50 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:16:45.378 08:02:50 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:16:45.378 08:02:50 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:16:45.378 08:02:50 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:45.378 08:02:50 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:45.378 08:02:50 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:16:45.378 08:02:50 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:16:45.378 08:02:50 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:45.378 08:02:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:45.378 08:02:50 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:45.378 08:02:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:45.378 08:02:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:45.378 08:02:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:45.378 08:02:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:45.378 08:02:50 -- nvmf/common.sh@151 -- # 
NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:45.378 08:02:50 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:16:45.378 08:02:50 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:16:45.378 Cannot find device "nvmf_tgt_br" 00:16:45.378 08:02:51 -- nvmf/common.sh@154 -- # true 00:16:45.378 08:02:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:16:45.378 Cannot find device "nvmf_tgt_br2" 00:16:45.378 08:02:51 -- nvmf/common.sh@155 -- # true 00:16:45.378 08:02:51 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:16:45.378 08:02:51 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:16:45.378 Cannot find device "nvmf_tgt_br" 00:16:45.378 08:02:51 -- nvmf/common.sh@157 -- # true 00:16:45.378 08:02:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:16:45.378 Cannot find device "nvmf_tgt_br2" 00:16:45.378 08:02:51 -- nvmf/common.sh@158 -- # true 00:16:45.378 08:02:51 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:16:45.378 08:02:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:16:45.378 08:02:51 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:45.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.378 08:02:51 -- nvmf/common.sh@161 -- # true 00:16:45.378 08:02:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:45.378 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:45.378 08:02:51 -- nvmf/common.sh@162 -- # true 00:16:45.378 08:02:51 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:16:45.378 08:02:51 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:45.378 08:02:51 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:45.378 08:02:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:45.378 08:02:51 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:45.378 08:02:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:45.378 08:02:51 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:45.378 08:02:51 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:16:45.378 08:02:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:16:45.378 08:02:51 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:16:45.378 08:02:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:16:45.638 08:02:51 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:16:45.638 08:02:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:16:45.638 08:02:51 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:45.638 08:02:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:45.638 08:02:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:45.638 08:02:51 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:16:45.638 08:02:51 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:16:45.638 08:02:51 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:16:45.638 08:02:51 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:45.638 08:02:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:45.638 
08:02:51 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:45.638 08:02:51 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:45.638 08:02:51 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:16:45.638 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:45.638 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.073 ms 00:16:45.638 00:16:45.638 --- 10.0.0.2 ping statistics --- 00:16:45.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.638 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:16:45.638 08:02:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:16:45.638 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:45.638 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:16:45.638 00:16:45.638 --- 10.0.0.3 ping statistics --- 00:16:45.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.638 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:16:45.638 08:02:51 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:45.638 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:45.638 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:16:45.638 00:16:45.638 --- 10.0.0.1 ping statistics --- 00:16:45.638 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:45.638 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:16:45.638 08:02:51 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:45.638 08:02:51 -- nvmf/common.sh@421 -- # return 0 00:16:45.638 08:02:51 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:45.638 08:02:51 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:45.638 08:02:51 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:16:45.638 08:02:51 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:16:45.638 08:02:51 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:45.638 08:02:51 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:16:45.638 08:02:51 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:16:45.638 08:02:51 -- host/digest.sh@134 -- # trap cleanup SIGINT SIGTERM EXIT 00:16:45.638 08:02:51 -- host/digest.sh@135 -- # run_test nvmf_digest_clean run_digest 00:16:45.638 08:02:51 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:16:45.638 08:02:51 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:16:45.638 08:02:51 -- common/autotest_common.sh@10 -- # set +x 00:16:45.638 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:16:45.638 ************************************ 00:16:45.638 START TEST nvmf_digest_clean 00:16:45.638 ************************************ 00:16:45.638 08:02:51 -- common/autotest_common.sh@1104 -- # run_digest 00:16:45.638 08:02:51 -- host/digest.sh@119 -- # nvmfappstart --wait-for-rpc 00:16:45.638 08:02:51 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:45.638 08:02:51 -- common/autotest_common.sh@712 -- # xtrace_disable 00:16:45.638 08:02:51 -- common/autotest_common.sh@10 -- # set +x 00:16:45.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:45.638 08:02:51 -- nvmf/common.sh@469 -- # nvmfpid=79152 00:16:45.638 08:02:51 -- nvmf/common.sh@470 -- # waitforlisten 79152 00:16:45.638 08:02:51 -- common/autotest_common.sh@819 -- # '[' -z 79152 ']' 00:16:45.638 08:02:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:45.638 08:02:51 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:16:45.638 08:02:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:45.638 08:02:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:45.638 08:02:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:45.638 08:02:51 -- common/autotest_common.sh@10 -- # set +x 00:16:45.638 [2024-07-13 08:02:51.399430] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:45.638 [2024-07-13 08:02:51.399547] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:45.905 [2024-07-13 08:02:51.542850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:45.905 [2024-07-13 08:02:51.585475] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:45.905 [2024-07-13 08:02:51.585662] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:45.905 [2024-07-13 08:02:51.585678] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:45.906 [2024-07-13 08:02:51.585688] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:16:45.906 [2024-07-13 08:02:51.585718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:45.906 08:02:51 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:45.906 08:02:51 -- common/autotest_common.sh@852 -- # return 0 00:16:45.906 08:02:51 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:45.906 08:02:51 -- common/autotest_common.sh@718 -- # xtrace_disable 00:16:45.906 08:02:51 -- common/autotest_common.sh@10 -- # set +x 00:16:45.906 08:02:51 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:45.906 08:02:51 -- host/digest.sh@120 -- # common_target_config 00:16:45.906 08:02:51 -- host/digest.sh@43 -- # rpc_cmd 00:16:45.906 08:02:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:16:45.906 08:02:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.178 null0 00:16:46.178 [2024-07-13 08:02:51.757221] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:46.178 [2024-07-13 08:02:51.781380] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.178 08:02:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:16:46.178 08:02:51 -- host/digest.sh@122 -- # run_bperf randread 4096 128 00:16:46.178 08:02:51 -- host/digest.sh@77 -- # local rw bs qd 00:16:46.178 08:02:51 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:46.178 08:02:51 -- host/digest.sh@80 -- # rw=randread 00:16:46.178 08:02:51 -- host/digest.sh@80 -- # bs=4096 00:16:46.178 08:02:51 -- host/digest.sh@80 -- # qd=128 00:16:46.178 08:02:51 -- host/digest.sh@82 -- # bperfpid=79171 00:16:46.178 08:02:51 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:46.178 08:02:51 -- host/digest.sh@83 -- # waitforlisten 79171 /var/tmp/bperf.sock 00:16:46.178 08:02:51 -- common/autotest_common.sh@819 -- # '[' -z 79171 ']' 00:16:46.178 08:02:51 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:46.178 08:02:51 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:46.178 08:02:51 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:46.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:46.178 08:02:51 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:46.178 08:02:51 -- common/autotest_common.sh@10 -- # set +x 00:16:46.178 [2024-07-13 08:02:51.840323] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:46.178 [2024-07-13 08:02:51.840404] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79171 ] 00:16:46.178 [2024-07-13 08:02:51.977846] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:46.437 [2024-07-13 08:02:52.016950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:46.437 08:02:52 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:46.437 08:02:52 -- common/autotest_common.sh@852 -- # return 0 00:16:46.437 08:02:52 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:46.437 08:02:52 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:46.437 08:02:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:46.697 08:02:52 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:46.697 08:02:52 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:46.956 nvme0n1 00:16:47.216 08:02:52 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:47.216 08:02:52 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:47.216 Running I/O for 2 seconds... 00:16:49.143 00:16:49.143 Latency(us) 00:16:49.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:49.143 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:16:49.143 nvme0n1 : 2.01 12998.94 50.78 0.00 0.00 9839.88 8936.73 20256.58 00:16:49.144 =================================================================================================================== 00:16:49.144 Total : 12998.94 50.78 0.00 0.00 9839.88 8936.73 20256.58 00:16:49.144 0 00:16:49.144 08:02:54 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:49.144 08:02:54 -- host/digest.sh@92 -- # get_accel_stats 00:16:49.144 08:02:54 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:49.144 08:02:54 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:49.144 | select(.opcode=="crc32c") 00:16:49.144 | "\(.module_name) \(.executed)"' 00:16:49.144 08:02:54 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:49.711 08:02:55 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:49.711 08:02:55 -- host/digest.sh@93 -- # exp_module=software 00:16:49.711 08:02:55 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:49.711 08:02:55 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:49.711 08:02:55 -- host/digest.sh@97 -- # killprocess 79171 00:16:49.711 08:02:55 -- common/autotest_common.sh@926 -- # '[' -z 79171 ']' 00:16:49.711 08:02:55 -- common/autotest_common.sh@930 -- # kill -0 79171 00:16:49.711 08:02:55 -- common/autotest_common.sh@931 -- # uname 00:16:49.711 08:02:55 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:49.711 08:02:55 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79171 00:16:49.711 killing process with pid 79171 00:16:49.711 Received shutdown signal, test time was about 2.000000 seconds 00:16:49.711 00:16:49.711 Latency(us) 00:16:49.711 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:16:49.711 =================================================================================================================== 00:16:49.712 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:49.712 08:02:55 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:49.712 08:02:55 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:49.712 08:02:55 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79171' 00:16:49.712 08:02:55 -- common/autotest_common.sh@945 -- # kill 79171 00:16:49.712 08:02:55 -- common/autotest_common.sh@950 -- # wait 79171 00:16:49.712 08:02:55 -- host/digest.sh@123 -- # run_bperf randread 131072 16 00:16:49.712 08:02:55 -- host/digest.sh@77 -- # local rw bs qd 00:16:49.712 08:02:55 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:49.712 08:02:55 -- host/digest.sh@80 -- # rw=randread 00:16:49.712 08:02:55 -- host/digest.sh@80 -- # bs=131072 00:16:49.712 08:02:55 -- host/digest.sh@80 -- # qd=16 00:16:49.712 08:02:55 -- host/digest.sh@82 -- # bperfpid=79204 00:16:49.712 08:02:55 -- host/digest.sh@83 -- # waitforlisten 79204 /var/tmp/bperf.sock 00:16:49.712 08:02:55 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:49.712 08:02:55 -- common/autotest_common.sh@819 -- # '[' -z 79204 ']' 00:16:49.712 08:02:55 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:49.712 08:02:55 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:49.712 08:02:55 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:49.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:49.712 08:02:55 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:49.712 08:02:55 -- common/autotest_common.sh@10 -- # set +x 00:16:49.712 [2024-07-13 08:02:55.489699] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:49.712 [2024-07-13 08:02:55.490032] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79204 ] 00:16:49.712 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:49.712 Zero copy mechanism will not be used. 
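The bperf pass for pid 79171 above, and each pass that follows, uses the same RPC sequence. A compact sketch of one pass, using the rpc.py and bdevperf.py helpers shown in the trace (paths shortened relative to the spdk repo; the nqn and addresses are the ones from the log):

    # One digest-test pass, as traced: start bdevperf paused, finish init over its RPC
    # socket, attach the target with data digest enabled, then drive I/O for 2 seconds.
    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -q 128 -t 2 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The later passes only change the workload parameters (-w randread/randwrite, -o 4096/131072, -q 128/16), which is why the 131072-byte runs also print the zero-copy-threshold notice seen here.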
00:16:49.970 [2024-07-13 08:02:55.625712] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.970 [2024-07-13 08:02:55.666503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:49.970 08:02:55 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:49.970 08:02:55 -- common/autotest_common.sh@852 -- # return 0 00:16:49.970 08:02:55 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:49.970 08:02:55 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:49.970 08:02:55 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:50.230 08:02:56 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:50.230 08:02:56 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:50.798 nvme0n1 00:16:50.798 08:02:56 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:50.798 08:02:56 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:50.798 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:50.798 Zero copy mechanism will not be used. 00:16:50.798 Running I/O for 2 seconds... 00:16:52.704 00:16:52.704 Latency(us) 00:16:52.704 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.704 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:16:52.704 nvme0n1 : 2.00 8178.35 1022.29 0.00 0.00 1953.65 1601.16 3515.11 00:16:52.704 =================================================================================================================== 00:16:52.704 Total : 8178.35 1022.29 0.00 0.00 1953.65 1601.16 3515.11 00:16:52.704 0 00:16:52.704 08:02:58 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:52.704 08:02:58 -- host/digest.sh@92 -- # get_accel_stats 00:16:52.704 08:02:58 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:52.704 08:02:58 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:52.704 08:02:58 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:52.704 | select(.opcode=="crc32c") 00:16:52.704 | "\(.module_name) \(.executed)"' 00:16:52.964 08:02:58 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:52.964 08:02:58 -- host/digest.sh@93 -- # exp_module=software 00:16:52.964 08:02:58 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:52.964 08:02:58 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:52.964 08:02:58 -- host/digest.sh@97 -- # killprocess 79204 00:16:52.964 08:02:58 -- common/autotest_common.sh@926 -- # '[' -z 79204 ']' 00:16:52.964 08:02:58 -- common/autotest_common.sh@930 -- # kill -0 79204 00:16:52.964 08:02:58 -- common/autotest_common.sh@931 -- # uname 00:16:52.964 08:02:58 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:52.964 08:02:58 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79204 00:16:52.964 killing process with pid 79204 00:16:52.964 Received shutdown signal, test time was about 2.000000 seconds 00:16:52.964 00:16:52.964 Latency(us) 00:16:52.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:52.964 
=================================================================================================================== 00:16:52.964 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:52.964 08:02:58 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:52.964 08:02:58 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:52.964 08:02:58 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79204' 00:16:52.964 08:02:58 -- common/autotest_common.sh@945 -- # kill 79204 00:16:52.964 08:02:58 -- common/autotest_common.sh@950 -- # wait 79204 00:16:53.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:53.223 08:02:58 -- host/digest.sh@124 -- # run_bperf randwrite 4096 128 00:16:53.223 08:02:58 -- host/digest.sh@77 -- # local rw bs qd 00:16:53.223 08:02:58 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:53.223 08:02:58 -- host/digest.sh@80 -- # rw=randwrite 00:16:53.223 08:02:58 -- host/digest.sh@80 -- # bs=4096 00:16:53.223 08:02:58 -- host/digest.sh@80 -- # qd=128 00:16:53.223 08:02:58 -- host/digest.sh@82 -- # bperfpid=79233 00:16:53.223 08:02:58 -- host/digest.sh@83 -- # waitforlisten 79233 /var/tmp/bperf.sock 00:16:53.223 08:02:58 -- common/autotest_common.sh@819 -- # '[' -z 79233 ']' 00:16:53.223 08:02:58 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:16:53.223 08:02:58 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:53.223 08:02:58 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:53.223 08:02:58 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:53.223 08:02:58 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:53.223 08:02:58 -- common/autotest_common.sh@10 -- # set +x 00:16:53.223 [2024-07-13 08:02:58.927522] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:16:53.223 [2024-07-13 08:02:58.928295] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79233 ] 00:16:53.482 [2024-07-13 08:02:59.063482] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:53.482 [2024-07-13 08:02:59.095120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:53.482 08:02:59 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:53.482 08:02:59 -- common/autotest_common.sh@852 -- # return 0 00:16:53.482 08:02:59 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:53.482 08:02:59 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:53.482 08:02:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:53.740 08:02:59 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:53.740 08:02:59 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:53.999 nvme0n1 00:16:53.999 08:02:59 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:53.999 08:02:59 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:53.999 Running I/O for 2 seconds... 00:16:56.531 00:16:56.531 Latency(us) 00:16:56.531 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:56.531 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:56.531 nvme0n1 : 2.00 16119.85 62.97 0.00 0.00 7933.78 6553.60 19779.96 00:16:56.531 =================================================================================================================== 00:16:56.531 Total : 16119.85 62.97 0.00 0.00 7933.78 6553.60 19779.96 00:16:56.531 0 00:16:56.531 08:03:01 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:16:56.531 08:03:01 -- host/digest.sh@92 -- # get_accel_stats 00:16:56.531 08:03:01 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:16:56.531 08:03:01 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:16:56.531 08:03:01 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:16:56.531 | select(.opcode=="crc32c") 00:16:56.531 | "\(.module_name) \(.executed)"' 00:16:56.531 08:03:02 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:16:56.531 08:03:02 -- host/digest.sh@93 -- # exp_module=software 00:16:56.531 08:03:02 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:16:56.531 08:03:02 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:16:56.531 08:03:02 -- host/digest.sh@97 -- # killprocess 79233 00:16:56.531 08:03:02 -- common/autotest_common.sh@926 -- # '[' -z 79233 ']' 00:16:56.531 08:03:02 -- common/autotest_common.sh@930 -- # kill -0 79233 00:16:56.531 08:03:02 -- common/autotest_common.sh@931 -- # uname 00:16:56.531 08:03:02 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:16:56.531 08:03:02 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79233 00:16:56.531 killing process with pid 79233 00:16:56.531 Received shutdown signal, test time was about 2.000000 seconds 00:16:56.531 00:16:56.531 Latency(us) 00:16:56.531 Device Information : runtime(s) IOPS 
MiB/s Fail/s TO/s Average min max 00:16:56.531 =================================================================================================================== 00:16:56.531 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:56.531 08:03:02 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:16:56.531 08:03:02 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:16:56.531 08:03:02 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79233' 00:16:56.531 08:03:02 -- common/autotest_common.sh@945 -- # kill 79233 00:16:56.531 08:03:02 -- common/autotest_common.sh@950 -- # wait 79233 00:16:56.531 08:03:02 -- host/digest.sh@125 -- # run_bperf randwrite 131072 16 00:16:56.531 08:03:02 -- host/digest.sh@77 -- # local rw bs qd 00:16:56.531 08:03:02 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:16:56.531 08:03:02 -- host/digest.sh@80 -- # rw=randwrite 00:16:56.531 08:03:02 -- host/digest.sh@80 -- # bs=131072 00:16:56.531 08:03:02 -- host/digest.sh@80 -- # qd=16 00:16:56.531 08:03:02 -- host/digest.sh@82 -- # bperfpid=79258 00:16:56.531 08:03:02 -- host/digest.sh@81 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:16:56.531 08:03:02 -- host/digest.sh@83 -- # waitforlisten 79258 /var/tmp/bperf.sock 00:16:56.531 08:03:02 -- common/autotest_common.sh@819 -- # '[' -z 79258 ']' 00:16:56.531 08:03:02 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:16:56.531 08:03:02 -- common/autotest_common.sh@824 -- # local max_retries=100 00:16:56.531 08:03:02 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:16:56.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:16:56.531 08:03:02 -- common/autotest_common.sh@828 -- # xtrace_disable 00:16:56.531 08:03:02 -- common/autotest_common.sh@10 -- # set +x 00:16:56.531 [2024-07-13 08:03:02.300157] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:16:56.531 [2024-07-13 08:03:02.300388] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79258 ] 00:16:56.531 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:56.531 Zero copy mechanism will not be used. 
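The MiB/s column in the bdevperf tables above is the IOPS figure multiplied by the run's I/O size; the reported numbers check out if you redo the arithmetic. A quick shell sanity check using the two completed runs above (1048576 bytes per MiB; bc is assumed to be available on the build VM):

  $ echo '8178.35 * 131072 / 1048576' | bc -l    # randread,  128 KiB I/O  -> ~1022.29 MiB/s
  $ echo '16119.85 * 4096 / 1048576' | bc -l     # randwrite,   4 KiB I/O  -> ~62.97 MiB/s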
00:16:56.790 [2024-07-13 08:03:02.436686] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:56.790 [2024-07-13 08:03:02.472616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.791 08:03:02 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:16:56.791 08:03:02 -- common/autotest_common.sh@852 -- # return 0 00:16:56.791 08:03:02 -- host/digest.sh@85 -- # [[ 0 -eq 1 ]] 00:16:56.791 08:03:02 -- host/digest.sh@86 -- # bperf_rpc framework_start_init 00:16:56.791 08:03:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:16:57.357 08:03:02 -- host/digest.sh@88 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:57.357 08:03:02 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:16:57.616 nvme0n1 00:16:57.616 08:03:03 -- host/digest.sh@91 -- # bperf_py perform_tests 00:16:57.616 08:03:03 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:16:57.616 I/O size of 131072 is greater than zero copy threshold (65536). 00:16:57.616 Zero copy mechanism will not be used. 00:16:57.616 Running I/O for 2 seconds... 00:17:00.146 00:17:00.146 Latency(us) 00:17:00.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.146 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:17:00.146 nvme0n1 : 2.00 5744.66 718.08 0.00 0.00 2779.05 2263.97 7268.54 00:17:00.146 =================================================================================================================== 00:17:00.146 Total : 5744.66 718.08 0.00 0.00 2779.05 2263.97 7268.54 00:17:00.146 0 00:17:00.146 08:03:05 -- host/digest.sh@92 -- # read -r acc_module acc_executed 00:17:00.146 08:03:05 -- host/digest.sh@92 -- # get_accel_stats 00:17:00.146 08:03:05 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:17:00.146 08:03:05 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:17:00.146 08:03:05 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:17:00.146 | select(.opcode=="crc32c") 00:17:00.146 | "\(.module_name) \(.executed)"' 00:17:00.146 08:03:05 -- host/digest.sh@93 -- # [[ 0 -eq 1 ]] 00:17:00.146 08:03:05 -- host/digest.sh@93 -- # exp_module=software 00:17:00.146 08:03:05 -- host/digest.sh@94 -- # (( acc_executed > 0 )) 00:17:00.146 08:03:05 -- host/digest.sh@95 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:17:00.146 08:03:05 -- host/digest.sh@97 -- # killprocess 79258 00:17:00.146 08:03:05 -- common/autotest_common.sh@926 -- # '[' -z 79258 ']' 00:17:00.146 08:03:05 -- common/autotest_common.sh@930 -- # kill -0 79258 00:17:00.146 08:03:05 -- common/autotest_common.sh@931 -- # uname 00:17:00.146 08:03:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:00.146 08:03:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79258 00:17:00.146 killing process with pid 79258 00:17:00.146 Received shutdown signal, test time was about 2.000000 seconds 00:17:00.146 00:17:00.146 Latency(us) 00:17:00.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:00.146 =================================================================================================================== 
00:17:00.146 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:00.146 08:03:05 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:00.146 08:03:05 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:00.146 08:03:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79258' 00:17:00.146 08:03:05 -- common/autotest_common.sh@945 -- # kill 79258 00:17:00.146 08:03:05 -- common/autotest_common.sh@950 -- # wait 79258 00:17:00.146 08:03:05 -- host/digest.sh@126 -- # killprocess 79152 00:17:00.146 08:03:05 -- common/autotest_common.sh@926 -- # '[' -z 79152 ']' 00:17:00.147 08:03:05 -- common/autotest_common.sh@930 -- # kill -0 79152 00:17:00.147 08:03:05 -- common/autotest_common.sh@931 -- # uname 00:17:00.147 08:03:05 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:00.147 08:03:05 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79152 00:17:00.147 killing process with pid 79152 00:17:00.147 08:03:05 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:00.147 08:03:05 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:00.147 08:03:05 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79152' 00:17:00.147 08:03:05 -- common/autotest_common.sh@945 -- # kill 79152 00:17:00.147 08:03:05 -- common/autotest_common.sh@950 -- # wait 79152 00:17:00.405 00:17:00.405 real 0m14.672s 00:17:00.405 user 0m28.322s 00:17:00.405 sys 0m4.382s 00:17:00.405 ************************************ 00:17:00.405 END TEST nvmf_digest_clean 00:17:00.405 ************************************ 00:17:00.405 08:03:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.405 08:03:06 -- common/autotest_common.sh@10 -- # set +x 00:17:00.405 08:03:06 -- host/digest.sh@136 -- # run_test nvmf_digest_error run_digest_error 00:17:00.405 08:03:06 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:17:00.405 08:03:06 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:00.405 08:03:06 -- common/autotest_common.sh@10 -- # set +x 00:17:00.405 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:17:00.405 ************************************ 00:17:00.405 START TEST nvmf_digest_error 00:17:00.405 ************************************ 00:17:00.405 08:03:06 -- common/autotest_common.sh@1104 -- # run_digest_error 00:17:00.405 08:03:06 -- host/digest.sh@101 -- # nvmfappstart --wait-for-rpc 00:17:00.405 08:03:06 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:00.405 08:03:06 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:00.405 08:03:06 -- common/autotest_common.sh@10 -- # set +x 00:17:00.405 08:03:06 -- nvmf/common.sh@469 -- # nvmfpid=79315 00:17:00.405 08:03:06 -- nvmf/common.sh@470 -- # waitforlisten 79315 00:17:00.405 08:03:06 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:17:00.405 08:03:06 -- common/autotest_common.sh@819 -- # '[' -z 79315 ']' 00:17:00.405 08:03:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:00.405 08:03:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:00.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:00.405 08:03:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
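Each clean-path run in nvmf_digest_clean (finished above) follows the same sequence; the wrappers (run_bperf, bperf_rpc, bperf_py) come from host/digest.sh, and the underlying commands are the ones visible in the trace. A condensed sketch, using the socket path and target address from this job:

  # launched in the background by run_bperf with --wait-for-rpc so it can be configured over /var/tmp/bperf.sock first
  $ /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &
  $ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
  $ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller \
        --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $ /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
  $ /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
        | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'

digest.sh then asserts that the crc32c count read back is greater than zero and that the executing module matches the expected one (software in this job) before killing the bdevperf process.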
00:17:00.405 08:03:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:00.405 08:03:06 -- common/autotest_common.sh@10 -- # set +x 00:17:00.405 [2024-07-13 08:03:06.126521] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:00.405 [2024-07-13 08:03:06.126629] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:00.664 [2024-07-13 08:03:06.270022] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.664 [2024-07-13 08:03:06.309625] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:00.664 [2024-07-13 08:03:06.309848] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:00.664 [2024-07-13 08:03:06.309862] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:00.664 [2024-07-13 08:03:06.309871] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:00.664 [2024-07-13 08:03:06.309930] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.664 08:03:06 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:00.664 08:03:06 -- common/autotest_common.sh@852 -- # return 0 00:17:00.664 08:03:06 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:00.664 08:03:06 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:00.664 08:03:06 -- common/autotest_common.sh@10 -- # set +x 00:17:00.664 08:03:06 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:00.664 08:03:06 -- host/digest.sh@103 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:17:00.664 08:03:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:00.664 08:03:06 -- common/autotest_common.sh@10 -- # set +x 00:17:00.664 [2024-07-13 08:03:06.406334] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:17:00.664 08:03:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:00.664 08:03:06 -- host/digest.sh@104 -- # common_target_config 00:17:00.664 08:03:06 -- host/digest.sh@43 -- # rpc_cmd 00:17:00.664 08:03:06 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:00.664 08:03:06 -- common/autotest_common.sh@10 -- # set +x 00:17:00.923 null0 00:17:00.923 [2024-07-13 08:03:06.487404] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:00.923 [2024-07-13 08:03:06.511615] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.923 08:03:06 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:00.924 08:03:06 -- host/digest.sh@107 -- # run_bperf_err randread 4096 128 00:17:00.924 08:03:06 -- host/digest.sh@54 -- # local rw bs qd 00:17:00.924 08:03:06 -- host/digest.sh@56 -- # rw=randread 00:17:00.924 08:03:06 -- host/digest.sh@56 -- # bs=4096 00:17:00.924 08:03:06 -- host/digest.sh@56 -- # qd=128 00:17:00.924 08:03:06 -- host/digest.sh@58 -- # bperfpid=79334 00:17:00.924 08:03:06 -- host/digest.sh@60 -- # waitforlisten 79334 /var/tmp/bperf.sock 00:17:00.924 08:03:06 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:17:00.924 08:03:06 -- common/autotest_common.sh@819 -- # '[' -z 79334 ']' 00:17:00.924 
08:03:06 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:00.924 08:03:06 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:00.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:00.924 08:03:06 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:00.924 08:03:06 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:00.924 08:03:06 -- common/autotest_common.sh@10 -- # set +x 00:17:00.924 [2024-07-13 08:03:06.565756] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:00.924 [2024-07-13 08:03:06.565854] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79334 ] 00:17:00.924 [2024-07-13 08:03:06.700870] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.924 [2024-07-13 08:03:06.737487] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:01.859 08:03:07 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:01.859 08:03:07 -- common/autotest_common.sh@852 -- # return 0 00:17:01.859 08:03:07 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:01.859 08:03:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:02.118 08:03:07 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:02.118 08:03:07 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.118 08:03:07 -- common/autotest_common.sh@10 -- # set +x 00:17:02.118 08:03:07 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.118 08:03:07 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:02.118 08:03:07 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:02.376 nvme0n1 00:17:02.376 08:03:08 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:02.376 08:03:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:02.376 08:03:08 -- common/autotest_common.sh@10 -- # set +x 00:17:02.376 08:03:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:02.376 08:03:08 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:02.376 08:03:08 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:02.663 Running I/O for 2 seconds... 
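The run of errors that follows is the expected outcome of this test: right after the target comes up with --wait-for-rpc, digest.sh routes its crc32c operations to the error accel module and then arms that module to corrupt results, so the data digests the target produces for the affected operations no longer match the payload and the initiator rejects those reads. A condensed sketch of the RPCs visible in the trace (rpc_cmd goes to the nvmf target's default RPC socket, bperf_rpc to /var/tmp/bperf.sock; flag semantics are as used by digest.sh, not re-derived here, and the disable/attach/corrupt steps interleave in the trace but are grouped by side here):

  # nvmf target side
  $ rpc.py accel_assign_opc -o crc32c -m error              # crc32c handled by the error module
  $ rpc.py accel_error_inject_error -o crc32c -t disable    # start with injection off
  $ rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256

  # bdevperf side
  $ rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $ rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0      # --ddgst enables the TCP data digest
  $ bdevperf.py -s /var/tmp/bperf.sock perform_tests

Each affected read then shows up below as an initiator-side digest failure (nvme_tcp.c:1391 "data digest error") followed by the offending READ and its COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion.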
00:17:02.663 [2024-07-13 08:03:08.316790] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.663 [2024-07-13 08:03:08.316841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.663 [2024-07-13 08:03:08.316873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.663 [2024-07-13 08:03:08.335684] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.663 [2024-07-13 08:03:08.335722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.663 [2024-07-13 08:03:08.335752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.663 [2024-07-13 08:03:08.354618] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.663 [2024-07-13 08:03:08.354654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.663 [2024-07-13 08:03:08.354684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.663 [2024-07-13 08:03:08.373622] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.663 [2024-07-13 08:03:08.373676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.663 [2024-07-13 08:03:08.373706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.663 [2024-07-13 08:03:08.392245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.663 [2024-07-13 08:03:08.392285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.663 [2024-07-13 08:03:08.392299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.663 [2024-07-13 08:03:08.409844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.663 [2024-07-13 08:03:08.409884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.663 [2024-07-13 08:03:08.409898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.663 [2024-07-13 08:03:08.429605] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.663 [2024-07-13 08:03:08.429660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.663 [2024-07-13 08:03:08.429674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.663 [2024-07-13 08:03:08.449587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.663 [2024-07-13 08:03:08.449627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15230 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.663 [2024-07-13 08:03:08.449642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.468217] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.468256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.468269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.485857] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.485920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21782 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.485935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.505758] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.505846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:4332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.505891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.526375] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.526415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:12236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.526429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.545700] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.545754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:6614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.545784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.565816] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.565911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.565926] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.585081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.585149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.585162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.604220] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.604273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:8705 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.604287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.623668] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.623721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.623750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.642673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.642725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.642754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.661971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.662023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.662037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.680599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.680654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:14608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.680683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.700542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.700629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 
08:03:08.700643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.719754] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.719811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.719826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.738773] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.738829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:7103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.738843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:02.945 [2024-07-13 08:03:08.758015] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:02.945 [2024-07-13 08:03:08.758065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:12760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:02.945 [2024-07-13 08:03:08.758079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.204 [2024-07-13 08:03:08.777076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.204 [2024-07-13 08:03:08.777150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.204 [2024-07-13 08:03:08.777165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.204 [2024-07-13 08:03:08.795521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.204 [2024-07-13 08:03:08.795590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:13395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.204 [2024-07-13 08:03:08.795604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.204 [2024-07-13 08:03:08.815122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.204 [2024-07-13 08:03:08.815167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:19495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.204 [2024-07-13 08:03:08.815181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.204 [2024-07-13 08:03:08.833822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.204 [2024-07-13 08:03:08.833881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:20781 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:03.204 [2024-07-13 08:03:08.833896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.204 [2024-07-13 08:03:08.852623] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.204 [2024-07-13 08:03:08.852671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.204 [2024-07-13 08:03:08.852686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.204 [2024-07-13 08:03:08.872194] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.205 [2024-07-13 08:03:08.872249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:3386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.205 [2024-07-13 08:03:08.872278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.205 [2024-07-13 08:03:08.893190] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.205 [2024-07-13 08:03:08.893257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.205 [2024-07-13 08:03:08.893286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.205 [2024-07-13 08:03:08.914234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.205 [2024-07-13 08:03:08.914272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:11726 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.205 [2024-07-13 08:03:08.914286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.205 [2024-07-13 08:03:08.935112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.205 [2024-07-13 08:03:08.935146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:12331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.205 [2024-07-13 08:03:08.935174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.205 [2024-07-13 08:03:08.955508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.205 [2024-07-13 08:03:08.955546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.205 [2024-07-13 08:03:08.955561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.205 [2024-07-13 08:03:08.975571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.205 [2024-07-13 08:03:08.975607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:69 nsid:1 lba:11207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.205 [2024-07-13 08:03:08.975637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.205 [2024-07-13 08:03:08.996294] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.205 [2024-07-13 08:03:08.996331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:22127 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.205 [2024-07-13 08:03:08.996359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.205 [2024-07-13 08:03:09.017213] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.205 [2024-07-13 08:03:09.017282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.205 [2024-07-13 08:03:09.017295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.464 [2024-07-13 08:03:09.036503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.464 [2024-07-13 08:03:09.036539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:19276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.464 [2024-07-13 08:03:09.036553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.464 [2024-07-13 08:03:09.055349] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.464 [2024-07-13 08:03:09.055418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.464 [2024-07-13 08:03:09.055447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.464 [2024-07-13 08:03:09.074489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.464 [2024-07-13 08:03:09.074527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.464 [2024-07-13 08:03:09.074540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.464 [2024-07-13 08:03:09.093506] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.464 [2024-07-13 08:03:09.093542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.464 [2024-07-13 08:03:09.093555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.464 [2024-07-13 08:03:09.112619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.464 [2024-07-13 08:03:09.112655] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.464 [2024-07-13 08:03:09.112685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.464 [2024-07-13 08:03:09.132291] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.464 [2024-07-13 08:03:09.132328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:19019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.464 [2024-07-13 08:03:09.132357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.464 [2024-07-13 08:03:09.151241] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.464 [2024-07-13 08:03:09.151277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.464 [2024-07-13 08:03:09.151306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.464 [2024-07-13 08:03:09.171286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.464 [2024-07-13 08:03:09.171323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.464 [2024-07-13 08:03:09.171369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.464 [2024-07-13 08:03:09.190237] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.464 [2024-07-13 08:03:09.190276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:7802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.464 [2024-07-13 08:03:09.190290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.464 [2024-07-13 08:03:09.209227] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.464 [2024-07-13 08:03:09.209278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:20984 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.464 [2024-07-13 08:03:09.209307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.464 [2024-07-13 08:03:09.228484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.464 [2024-07-13 08:03:09.228520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.464 [2024-07-13 08:03:09.228564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.464 [2024-07-13 08:03:09.247408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 
00:17:03.464 [2024-07-13 08:03:09.247460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.464 [2024-07-13 08:03:09.247473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.464 [2024-07-13 08:03:09.266590] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.464 [2024-07-13 08:03:09.266626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:18633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.464 [2024-07-13 08:03:09.266639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.723 [2024-07-13 08:03:09.286113] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.723 [2024-07-13 08:03:09.286173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:2466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.723 [2024-07-13 08:03:09.286187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.723 [2024-07-13 08:03:09.305678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.723 [2024-07-13 08:03:09.305732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.723 [2024-07-13 08:03:09.305762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.723 [2024-07-13 08:03:09.324320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.723 [2024-07-13 08:03:09.324364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:17399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.723 [2024-07-13 08:03:09.324378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.723 [2024-07-13 08:03:09.344397] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.723 [2024-07-13 08:03:09.344450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.723 [2024-07-13 08:03:09.344494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.723 [2024-07-13 08:03:09.363549] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.723 [2024-07-13 08:03:09.363586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.723 [2024-07-13 08:03:09.363616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.723 [2024-07-13 08:03:09.382268] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.723 [2024-07-13 08:03:09.382306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:9061 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.723 [2024-07-13 08:03:09.382320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.723 [2024-07-13 08:03:09.401302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.723 [2024-07-13 08:03:09.401369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:6341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.723 [2024-07-13 08:03:09.401398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.723 [2024-07-13 08:03:09.419619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.723 [2024-07-13 08:03:09.419657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.723 [2024-07-13 08:03:09.419671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.723 [2024-07-13 08:03:09.438588] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.723 [2024-07-13 08:03:09.438654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.723 [2024-07-13 08:03:09.438666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.723 [2024-07-13 08:03:09.457810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.723 [2024-07-13 08:03:09.457872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:18703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.723 [2024-07-13 08:03:09.457886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.723 [2024-07-13 08:03:09.477044] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.723 [2024-07-13 08:03:09.477103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:12698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.723 [2024-07-13 08:03:09.477133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.723 [2024-07-13 08:03:09.495873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.723 [2024-07-13 08:03:09.495918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:18267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.724 [2024-07-13 08:03:09.495932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:17:03.724 [2024-07-13 08:03:09.515262] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.724 [2024-07-13 08:03:09.515319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:9756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.724 [2024-07-13 08:03:09.515363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.983 [2024-07-13 08:03:09.543347] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.983 [2024-07-13 08:03:09.543382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.983 [2024-07-13 08:03:09.543411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.983 [2024-07-13 08:03:09.563006] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.983 [2024-07-13 08:03:09.563060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.983 [2024-07-13 08:03:09.563105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.983 [2024-07-13 08:03:09.583576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.983 [2024-07-13 08:03:09.583628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.983 [2024-07-13 08:03:09.583657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.983 [2024-07-13 08:03:09.602748] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.983 [2024-07-13 08:03:09.602807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.983 [2024-07-13 08:03:09.602837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.983 [2024-07-13 08:03:09.622089] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.983 [2024-07-13 08:03:09.622173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.983 [2024-07-13 08:03:09.622188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.983 [2024-07-13 08:03:09.641418] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.983 [2024-07-13 08:03:09.641454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.983 [2024-07-13 08:03:09.641467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.983 [2024-07-13 08:03:09.660283] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.983 [2024-07-13 08:03:09.660319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.983 [2024-07-13 08:03:09.660347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.983 [2024-07-13 08:03:09.679595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.983 [2024-07-13 08:03:09.679647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:20440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.983 [2024-07-13 08:03:09.679676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.983 [2024-07-13 08:03:09.699449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.983 [2024-07-13 08:03:09.699502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.983 [2024-07-13 08:03:09.699545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.983 [2024-07-13 08:03:09.718667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.983 [2024-07-13 08:03:09.718719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:21553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.983 [2024-07-13 08:03:09.718762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.983 [2024-07-13 08:03:09.738344] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.983 [2024-07-13 08:03:09.738382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:23938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.983 [2024-07-13 08:03:09.738396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.983 [2024-07-13 08:03:09.757362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.983 [2024-07-13 08:03:09.757459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:8394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.983 [2024-07-13 08:03:09.757489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.983 [2024-07-13 08:03:09.776266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.983 [2024-07-13 08:03:09.776320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.983 [2024-07-13 
08:03:09.776349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:03.983 [2024-07-13 08:03:09.795145] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:03.983 [2024-07-13 08:03:09.795180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:03.983 [2024-07-13 08:03:09.795193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.242 [2024-07-13 08:03:09.814689] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.242 [2024-07-13 08:03:09.814725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.242 [2024-07-13 08:03:09.814754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.242 [2024-07-13 08:03:09.833212] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.242 [2024-07-13 08:03:09.833265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:8768 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.242 [2024-07-13 08:03:09.833309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.242 [2024-07-13 08:03:09.851873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.242 [2024-07-13 08:03:09.851916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.242 [2024-07-13 08:03:09.851945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.242 [2024-07-13 08:03:09.870915] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.242 [2024-07-13 08:03:09.870967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:6431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.242 [2024-07-13 08:03:09.870995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.242 [2024-07-13 08:03:09.890063] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.242 [2024-07-13 08:03:09.890139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:11669 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.242 [2024-07-13 08:03:09.890154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.242 [2024-07-13 08:03:09.908985] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.242 [2024-07-13 08:03:09.909019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25248 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:04.242 [2024-07-13 08:03:09.909047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.242 [2024-07-13 08:03:09.928041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.242 [2024-07-13 08:03:09.928108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.242 [2024-07-13 08:03:09.928122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.242 [2024-07-13 08:03:09.946984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.242 [2024-07-13 08:03:09.947018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11681 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.242 [2024-07-13 08:03:09.947030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.242 [2024-07-13 08:03:09.966343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.242 [2024-07-13 08:03:09.966381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:6063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.242 [2024-07-13 08:03:09.966395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.242 [2024-07-13 08:03:09.985238] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.242 [2024-07-13 08:03:09.985273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.242 [2024-07-13 08:03:09.985302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.242 [2024-07-13 08:03:10.004161] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.242 [2024-07-13 08:03:10.004215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.242 [2024-07-13 08:03:10.004229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.242 [2024-07-13 08:03:10.022346] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.242 [2024-07-13 08:03:10.022388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:9752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.242 [2024-07-13 08:03:10.022402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.242 [2024-07-13 08:03:10.042395] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.242 [2024-07-13 08:03:10.042434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:74 nsid:1 lba:13734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.242 [2024-07-13 08:03:10.042448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.501 [2024-07-13 08:03:10.063104] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.501 [2024-07-13 08:03:10.063158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:18405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.501 [2024-07-13 08:03:10.063172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.501 [2024-07-13 08:03:10.082930] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.501 [2024-07-13 08:03:10.082997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:15944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.501 [2024-07-13 08:03:10.083026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.501 [2024-07-13 08:03:10.102168] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.501 [2024-07-13 08:03:10.102207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.501 [2024-07-13 08:03:10.102221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.501 [2024-07-13 08:03:10.120890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.501 [2024-07-13 08:03:10.120950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.501 [2024-07-13 08:03:10.120980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.501 [2024-07-13 08:03:10.140122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.501 [2024-07-13 08:03:10.140173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:13764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.501 [2024-07-13 08:03:10.140185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.501 [2024-07-13 08:03:10.159843] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.501 [2024-07-13 08:03:10.159916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.501 [2024-07-13 08:03:10.159944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.501 [2024-07-13 08:03:10.178667] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.501 [2024-07-13 08:03:10.178724] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:25415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.501 [2024-07-13 08:03:10.178753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.501 [2024-07-13 08:03:10.198039] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.501 [2024-07-13 08:03:10.198075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.501 [2024-07-13 08:03:10.198105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.501 [2024-07-13 08:03:10.217297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.501 [2024-07-13 08:03:10.217348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.501 [2024-07-13 08:03:10.217362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.501 [2024-07-13 08:03:10.236690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.501 [2024-07-13 08:03:10.236758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:21350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.501 [2024-07-13 08:03:10.236771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.501 [2024-07-13 08:03:10.255571] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.501 [2024-07-13 08:03:10.255622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14796 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.501 [2024-07-13 08:03:10.255635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.501 [2024-07-13 08:03:10.274879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.501 [2024-07-13 08:03:10.274914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:1482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.501 [2024-07-13 08:03:10.274944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.501 [2024-07-13 08:03:10.293997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x151f3a0) 00:17:04.501 [2024-07-13 08:03:10.294065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:16056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:04.501 [2024-07-13 08:03:10.294093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:04.501 00:17:04.501 Latency(us) 00:17:04.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.501 Job: 
nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:17:04.501 nvme0n1 : 2.01 13076.75 51.08 0.00 0.00 9780.86 8579.26 37891.72 00:17:04.501 =================================================================================================================== 00:17:04.501 Total : 13076.75 51.08 0.00 0.00 9780.86 8579.26 37891.72 00:17:04.501 0 00:17:04.761 08:03:10 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:04.761 08:03:10 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:04.761 08:03:10 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:04.761 08:03:10 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:04.761 | .driver_specific 00:17:04.761 | .nvme_error 00:17:04.761 | .status_code 00:17:04.761 | .command_transient_transport_error' 00:17:05.020 08:03:10 -- host/digest.sh@71 -- # (( 103 > 0 )) 00:17:05.020 08:03:10 -- host/digest.sh@73 -- # killprocess 79334 00:17:05.020 08:03:10 -- common/autotest_common.sh@926 -- # '[' -z 79334 ']' 00:17:05.020 08:03:10 -- common/autotest_common.sh@930 -- # kill -0 79334 00:17:05.020 08:03:10 -- common/autotest_common.sh@931 -- # uname 00:17:05.020 08:03:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:05.020 08:03:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79334 00:17:05.020 08:03:10 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:05.020 killing process with pid 79334 00:17:05.020 08:03:10 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:05.020 08:03:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79334' 00:17:05.020 Received shutdown signal, test time was about 2.000000 seconds 00:17:05.020 00:17:05.020 Latency(us) 00:17:05.020 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:05.020 =================================================================================================================== 00:17:05.020 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:05.020 08:03:10 -- common/autotest_common.sh@945 -- # kill 79334 00:17:05.020 08:03:10 -- common/autotest_common.sh@950 -- # wait 79334 00:17:05.020 08:03:10 -- host/digest.sh@108 -- # run_bperf_err randread 131072 16 00:17:05.020 08:03:10 -- host/digest.sh@54 -- # local rw bs qd 00:17:05.020 08:03:10 -- host/digest.sh@56 -- # rw=randread 00:17:05.020 08:03:10 -- host/digest.sh@56 -- # bs=131072 00:17:05.020 08:03:10 -- host/digest.sh@56 -- # qd=16 00:17:05.020 08:03:10 -- host/digest.sh@58 -- # bperfpid=79371 00:17:05.020 08:03:10 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:17:05.020 08:03:10 -- host/digest.sh@60 -- # waitforlisten 79371 /var/tmp/bperf.sock 00:17:05.020 08:03:10 -- common/autotest_common.sh@819 -- # '[' -z 79371 ']' 00:17:05.020 08:03:10 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:05.020 08:03:10 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:05.020 08:03:10 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:05.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
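The trace above is where the first run gets judged: once bdevperf reports its totals, get_transient_errcount reads the bdev I/O statistics over the bperf RPC socket and pulls out the NVMe transient-transport-error counter, which has to be greater than zero (here it is 103) because data-digest errors were being injected for the whole run. A minimal stand-alone sketch of that query, assuming the SPDK checkout, socket path, and bdev name shown in the log:

  # Mirrors the traced get_transient_errcount step (paths and bdev name copied from the log above).
  SPDK_DIR=/home/vagrant/spdk_repo/spdk
  errcount=$("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0]
          | .driver_specific
          | .nvme_error
          | .status_code
          | .command_transient_transport_error')
  # The stage passes only if at least one READ completed with a transient transport error.
  (( errcount > 0 )) && echo "data digest errors were detected: $errcount completions"

The counter is available in bdev_get_iostat output because the test enables NVMe error statistics via bdev_nvme_set_options --nvme-error-stat, as can be seen again in the setup of the next run below.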
00:17:05.020 08:03:10 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:05.020 08:03:10 -- common/autotest_common.sh@10 -- # set +x 00:17:05.020 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:05.020 Zero copy mechanism will not be used. 00:17:05.020 [2024-07-13 08:03:10.812526] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:05.020 [2024-07-13 08:03:10.812617] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79371 ] 00:17:05.279 [2024-07-13 08:03:10.948106] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.279 [2024-07-13 08:03:10.984434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.216 08:03:11 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:06.216 08:03:11 -- common/autotest_common.sh@852 -- # return 0 00:17:06.216 08:03:11 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:06.216 08:03:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:06.474 08:03:12 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:06.474 08:03:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:06.474 08:03:12 -- common/autotest_common.sh@10 -- # set +x 00:17:06.474 08:03:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:06.474 08:03:12 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:06.474 08:03:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:06.733 nvme0n1 00:17:06.733 08:03:12 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:06.733 08:03:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:06.733 08:03:12 -- common/autotest_common.sh@10 -- # set +x 00:17:06.733 08:03:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:06.733 08:03:12 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:06.733 08:03:12 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:06.993 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:06.993 Zero copy mechanism will not be used. 00:17:06.993 Running I/O for 2 seconds... 
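This block sets up the second error-injection pass: randread with 131072-byte I/Os at queue depth 16 against a fresh bdevperf instance. The order of operations in the trace is the interesting part, and it can be condensed into the sketch below. The bperf_rpc calls visibly target /var/tmp/bperf.sock; the rpc_cmd wrappers do not expand their socket in this excerpt, so the TARGET_RPC variable is an assumption about where the accel error-injection RPCs are sent.

  # Condensed from the traced host/digest.sh steps; paths, address, and NQN copied from the log.
  BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  TARGET_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"   # assumption: default RPC socket, not shown in this excerpt

  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1   # keep NVMe error stats, retry indefinitely
  $TARGET_RPC accel_error_inject_error -o crc32c -t disable                  # start from a clean crc32c injection state
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0                         # attach with data digest (--ddgst) enabled
  $TARGET_RPC accel_error_inject_error -o crc32c -t corrupt -i 32            # corrupt the next 32 crc32c operations
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests                                   # kick off the queued randread job

With --bdev-retry-count set to -1, reads that complete with a transient transport error are retried by the bdev layer, so the injected digest failures surface as the NOTICE completions that follow rather than as failed I/O (the Fail/s column in the earlier summary stayed at 0.00).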
00:17:06.993 [2024-07-13 08:03:12.576670] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.576759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.576775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.580965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.581001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.581014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.585193] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.585228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.585257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.589023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.589057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.589085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.592965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.593001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.593030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.597199] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.597234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.597263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.601804] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.601888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.601904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.606226] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.606266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.606281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.610834] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.610898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.610928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.615305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.615341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.615370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.619971] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.620021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.620050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.624778] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.624847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.624867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.629743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.629810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.629842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.633951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.633989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.634017] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.638343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.638385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.638399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.642472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.642560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.642590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.646662] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.646702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.646732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.650903] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.650941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.650972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.655186] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.655220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.655249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.659297] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.659331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.993 [2024-07-13 08:03:12.659359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.993 [2024-07-13 08:03:12.663460] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.993 [2024-07-13 08:03:12.663494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:06.994 [2024-07-13 08:03:12.663523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.667690] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.667726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.667754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.671675] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.671710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.671738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.675753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.675814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.675845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.679797] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.679847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.679875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.683831] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.683870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.683899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.687874] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.687912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.687941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.691865] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.691901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.691930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.695844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.695877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.695905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.699600] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.699638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.699667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.703733] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.703800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.703830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.707724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.707762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.707798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.711701] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.711736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.711764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.715777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.715838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.715869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.719753] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.719814] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.719828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.723628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.723669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.723698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.727651] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.727692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.727722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.731636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.731673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.731701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.735705] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.735743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.735773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.739741] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.739806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.739820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.743556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.743593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.743623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.747569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 
00:17:06.994 [2024-07-13 08:03:12.747603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.747631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.751677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.751713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.751741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.755984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.756019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.756048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.760428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.760465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.760506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.764730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.764766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.764839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.769069] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.769121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.769165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.773360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.773395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.773424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.777747] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.777828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.994 [2024-07-13 08:03:12.777844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.994 [2024-07-13 08:03:12.782014] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.994 [2024-07-13 08:03:12.782050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.995 [2024-07-13 08:03:12.782079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.995 [2024-07-13 08:03:12.786308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.995 [2024-07-13 08:03:12.786348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.995 [2024-07-13 08:03:12.786362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:06.995 [2024-07-13 08:03:12.790479] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.995 [2024-07-13 08:03:12.790520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.995 [2024-07-13 08:03:12.790564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:06.995 [2024-07-13 08:03:12.795016] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.995 [2024-07-13 08:03:12.795056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.995 [2024-07-13 08:03:12.795085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:06.995 [2024-07-13 08:03:12.799041] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.995 [2024-07-13 08:03:12.799077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.995 [2024-07-13 08:03:12.799105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:06.995 [2024-07-13 08:03:12.803162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:06.995 [2024-07-13 08:03:12.803217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:06.995 [2024-07-13 08:03:12.803246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:07.253 [2024-07-13 08:03:12.807715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.253 [2024-07-13 08:03:12.807769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.253 [2024-07-13 08:03:12.807799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.253 [2024-07-13 08:03:12.811907] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.253 [2024-07-13 08:03:12.811942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.253 [2024-07-13 08:03:12.811985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.253 [2024-07-13 08:03:12.816125] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.253 [2024-07-13 08:03:12.816160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.253 [2024-07-13 08:03:12.816188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.253 [2024-07-13 08:03:12.820473] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.253 [2024-07-13 08:03:12.820510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.253 [2024-07-13 08:03:12.820539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.253 [2024-07-13 08:03:12.824692] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.253 [2024-07-13 08:03:12.824733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.253 [2024-07-13 08:03:12.824762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.253 [2024-07-13 08:03:12.828743] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.253 [2024-07-13 08:03:12.828810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.253 [2024-07-13 08:03:12.828841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.253 [2024-07-13 08:03:12.832861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.253 [2024-07-13 08:03:12.832896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.253 [2024-07-13 08:03:12.832925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.253 [2024-07-13 08:03:12.837258] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.253 [2024-07-13 08:03:12.837296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.253 [2024-07-13 08:03:12.837326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.253 [2024-07-13 08:03:12.841384] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.253 [2024-07-13 08:03:12.841420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.253 [2024-07-13 08:03:12.841448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.253 [2024-07-13 08:03:12.845561] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.253 [2024-07-13 08:03:12.845599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.253 [2024-07-13 08:03:12.845627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.253 [2024-07-13 08:03:12.849738] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.253 [2024-07-13 08:03:12.849803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.253 [2024-07-13 08:03:12.849835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.253 [2024-07-13 08:03:12.853997] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.253 [2024-07-13 08:03:12.854031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.253 [2024-07-13 08:03:12.854059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.253 [2024-07-13 08:03:12.858562] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.253 [2024-07-13 08:03:12.858615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.858644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.863060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.863095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.863124] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.867322] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.867359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.867387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.871424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.871459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.871488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.876126] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.876182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.876197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.880629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.880680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.880708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.885219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.885259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.885275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.889879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.889928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.889959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.894666] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.894718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.894731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.899363] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.899405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.899420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.903830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.903894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.903924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.908319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.908369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.908400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.912860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.912922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.912937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.916963] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.916999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.917027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.921263] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.921298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.921327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.925339] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.925375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.925404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.929333] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.929368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.929396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.933421] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.933489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.933518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.937472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.937524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.937552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.941376] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.941411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.941439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.945426] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.945463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.945509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.949441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.949477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.949506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.953538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.953573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.953602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.957464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.957499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.957528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.961558] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.961594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.961623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.965628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.965664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.965693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.969672] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.969707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.969736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.973703] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.973738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.973767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.977642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.977677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.977706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.981604] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 
00:17:07.254 [2024-07-13 08:03:12.981640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.981669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.985861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.985896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.985925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.989827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.989871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.989900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.993771] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.993835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.993864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:12.997982] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:12.998017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:12.998046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:13.001957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:13.001990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:13.002019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:13.005954] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:13.005987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:13.006015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:13.010121] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:13.010196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:13.010210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:13.014124] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:13.014201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:13.014215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:13.018001] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:13.018035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:13.018064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:13.022373] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:13.022414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:13.022443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:13.026547] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:13.026597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:13.026625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:13.030763] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:13.030822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:13.030852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:13.034713] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:13.034748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.254 [2024-07-13 08:03:13.034776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 
m:0 dnr:0 00:17:07.254 [2024-07-13 08:03:13.038673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.254 [2024-07-13 08:03:13.038708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.255 [2024-07-13 08:03:13.038736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.255 [2024-07-13 08:03:13.042545] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.255 [2024-07-13 08:03:13.042579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.255 [2024-07-13 08:03:13.042607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.255 [2024-07-13 08:03:13.046472] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.255 [2024-07-13 08:03:13.046509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.255 [2024-07-13 08:03:13.046538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.255 [2024-07-13 08:03:13.050345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.255 [2024-07-13 08:03:13.050383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.255 [2024-07-13 08:03:13.050412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.255 [2024-07-13 08:03:13.054240] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.255 [2024-07-13 08:03:13.054278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.255 [2024-07-13 08:03:13.054307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.255 [2024-07-13 08:03:13.058112] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.255 [2024-07-13 08:03:13.058170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.255 [2024-07-13 08:03:13.058184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.255 [2024-07-13 08:03:13.061965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.255 [2024-07-13 08:03:13.061997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.255 [2024-07-13 08:03:13.062025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.255 [2024-07-13 08:03:13.066082] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.255 [2024-07-13 08:03:13.066117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.255 [2024-07-13 08:03:13.066136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.070256] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.070295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.070309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.074409] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.074447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.074476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.078303] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.078342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.078358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.082066] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.082098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.082126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.085932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.085964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.085992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.089852] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.089885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.089913] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.093663] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.093697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.093726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.097556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.097590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.097619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.101501] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.101536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.101549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.105327] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.105362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.105390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.109228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.109262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.109290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.113246] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.113279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.113308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.117188] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.117222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:07.514 [2024-07-13 08:03:13.117250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.121054] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.121088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.121117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.124876] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.124910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.124938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.128683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.128717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.128745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.132612] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.132647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.132675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.136895] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.136930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.136960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.140900] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.140934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.140962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.144830] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.144864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.144892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.514 [2024-07-13 08:03:13.148823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.514 [2024-07-13 08:03:13.148888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.514 [2024-07-13 08:03:13.148903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.153151] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.153201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.153229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.157021] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.157055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.157083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.160898] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.160933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.160962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.164730] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.164764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.164836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.168691] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.168725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.168754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.172641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.172676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.172704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.176570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.176604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.176633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.180524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.180558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.180587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.184481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.184515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.184544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.188503] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.188538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.188566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.192408] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.192442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.192470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.196358] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.196392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.196421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.200281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 
00:17:07.515 [2024-07-13 08:03:13.200315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.200343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.204268] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.204302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.204331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.208223] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.208257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.208285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.212121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.212154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.212183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.215977] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.216010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.216037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.219879] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.219911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.219939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.223761] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.223802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.223830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.227615] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.227649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.227677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.231518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.231552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.231580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.235484] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.235518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.235547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.239512] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.239546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.239575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.243454] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.243488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.243516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.515 [2024-07-13 08:03:13.247368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.515 [2024-07-13 08:03:13.247402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.515 [2024-07-13 08:03:13.247430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.251350] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.251386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.251415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.255290] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.255325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.255352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.259196] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.259230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.259257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.263197] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.263230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.263259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.267119] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.267152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.267180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.271010] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.271043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.271072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.274921] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.274953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.274981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.278686] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.278721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.278749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.282627] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.282661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.282689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.286606] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.286640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.286668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.290568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.290602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.290631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.294599] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.294633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.294661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.298566] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.298600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.298629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.302677] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.302711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.302739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.306569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.306603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.306631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.310432] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.310501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.310515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.314464] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.314542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.314570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.318412] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.318449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.318493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.322308] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.322344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.322358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.516 [2024-07-13 08:03:13.326367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.516 [2024-07-13 08:03:13.326407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.516 [2024-07-13 08:03:13.326423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.330641] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.330675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.330703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.334888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.334923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:07.777 [2024-07-13 08:03:13.334935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.338788] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.338850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.338880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.342752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.342812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.342842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.346669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.346703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.346732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.350602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.350635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.350663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.354542] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.354577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.354605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.358493] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.358542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.358570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.362550] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.362584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.362611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.366591] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.366624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.366652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.370476] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.370540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.370568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.374455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.374549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.374577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.378515] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.378579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.378607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.382423] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.382493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.382506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.386442] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.386510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.386538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.390374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.390411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.390441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.394368] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.394406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.394420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.398233] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.398268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.398297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.402171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.402208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.402239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.406449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.406502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.406516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.410822] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.410869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.410899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.414841] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.414885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.414913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.418823] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 
00:17:07.777 [2024-07-13 08:03:13.418884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.418914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.422767] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.422838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.422853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.426706] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.426740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.426768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.430574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.430608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.430635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.434584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.434618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.434645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.438538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.438571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.438599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.442524] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.442564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.442579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.446543] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.446577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.446605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.450389] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.450426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.450455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.454271] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.454306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.454335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.458164] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.458201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.458231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.462023] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.462055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.462083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.466024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.466059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.466103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.470433] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.470502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.470532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.474825] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.474881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.474894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.478965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.478998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.479026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.483060] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.483093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.483120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.487362] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.487398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.487428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.491635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.491684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.777 [2024-07-13 08:03:13.491713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.777 [2024-07-13 08:03:13.495951] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.777 [2024-07-13 08:03:13.495985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.496013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.500162] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.500216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.500246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.504518] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.504568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.504597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.509135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.509206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.509220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.513482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.513546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.513574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.517810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.517869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.517898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.522304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.522344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.522358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.526727] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.526762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.526818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.530975] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.531008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.531036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.535230] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.535265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.535293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.539441] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.539476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.539520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.543567] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.543601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.543630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.547554] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.547588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.547616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.551635] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.551669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.551698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.555573] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.555607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.555635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.559896] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.559931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:07.778 [2024-07-13 08:03:13.559959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.564424] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.564464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.564479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.569317] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.569354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.569384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.574266] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.574306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.574320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.578873] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.578907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.578936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.583455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.583523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.583566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:07.778 [2024-07-13 08:03:13.588277] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:07.778 [2024-07-13 08:03:13.588317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:07.778 [2024-07-13 08:03:13.588332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.592871] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.592915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.592943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.597912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.597993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.598023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.602936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.603001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.603030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.607661] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.607695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.607723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.612033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.612067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.612096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.616286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.616322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.616350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.620568] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.620602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.620631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.624828] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.624892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.624922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.629087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.629138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.629168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.633673] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.633707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.633736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.638042] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.638075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.638103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.642367] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.642409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.642423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.646787] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.646848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.646877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.651076] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.651109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.651137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.655396] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 
00:17:08.038 [2024-07-13 08:03:13.655433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.655463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.659708] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.038 [2024-07-13 08:03:13.659744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.038 [2024-07-13 08:03:13.659772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.038 [2024-07-13 08:03:13.663962] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.663995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.664024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.668343] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.668381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.668410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.672697] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.672733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.672761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.677136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.677219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.677249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.681696] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.681731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.681760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.686236] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.686275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.686290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.690777] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.690837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.690867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.695407] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.695448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.695462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.700028] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.700062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.700090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.704577] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.704614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.704643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.709121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.709174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.709190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.713827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.713892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.713938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.718664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.718700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.718729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.723033] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.723067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.723096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.727360] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.727397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.727427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.731683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.731718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.731746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.735890] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.735925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.735953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.739980] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.740016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.740046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.744017] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.744052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.744081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.748133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.748185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.748214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.752215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.752251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.752280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.756379] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.756415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.756444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.760481] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.760517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.760546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.764589] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.764624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.764653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.768810] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.768845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.768874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.772957] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.772995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.773024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.776984] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.777020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.777050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.781008] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.781044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.039 [2024-07-13 08:03:13.781072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.039 [2024-07-13 08:03:13.785003] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.039 [2024-07-13 08:03:13.785039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.785068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.789029] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.789063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.789092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.793081] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.793116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.793161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.797156] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.797207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.797236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.801180] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.801231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:08.040 [2024-07-13 08:03:13.801260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.805466] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.805505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.805535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.809888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.809924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.809954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.813936] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.813971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.813999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.817940] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.817974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.818002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.822184] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.822224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.822239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.826286] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.826323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.826353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.830584] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.830619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.830647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.834654] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.834688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.834716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.838669] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.838703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.838731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.842678] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.842713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.842742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.846671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.846706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.846734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.040 [2024-07-13 08:03:13.850993] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.040 [2024-07-13 08:03:13.851026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.040 [2024-07-13 08:03:13.851055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.300 [2024-07-13 08:03:13.855121] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.300 [2024-07-13 08:03:13.855154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.300 [2024-07-13 08:03:13.855183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.300 [2024-07-13 08:03:13.859275] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.300 [2024-07-13 08:03:13.859309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.300 [2024-07-13 08:03:13.859337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.300 [2024-07-13 08:03:13.863305] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.300 [2024-07-13 08:03:13.863341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.300 [2024-07-13 08:03:13.863354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.300 [2024-07-13 08:03:13.867319] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.300 [2024-07-13 08:03:13.867353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.867381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.871438] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.871473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.871501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.875576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.875613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.875642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.879655] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.879690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.879718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.883737] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.883799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.883829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.887864] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 
00:17:08.301 [2024-07-13 08:03:13.887898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.887927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.891919] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.891953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.891981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.895953] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.895986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.896014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.899888] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.899922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.899949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.903827] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.903860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.903887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.907752] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.907813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.907841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.911633] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.911668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.911696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.915672] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.915707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.915735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.919894] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.919928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.919957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.924228] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.924264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.924293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.928513] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.928564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.928593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.933024] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.933061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.933090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.937510] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.937546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.937574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.941796] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.941880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.941910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 
p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.946086] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.946124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.946179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.950602] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.950638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.950666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.955022] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.955063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.955093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.959345] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.959383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.959412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.963766] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.963844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.963860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.968122] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.968158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.968187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.972538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.972597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.972626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.976965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.977018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.977031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.981539] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.981576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.981605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.986070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.301 [2024-07-13 08:03:13.986108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.301 [2024-07-13 08:03:13.986148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.301 [2024-07-13 08:03:13.990540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:13.990606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:13.990635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:13.994990] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:13.995027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:13.995056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:13.999388] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:13.999426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:13.999454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.003619] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.003656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.003685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.007685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.007723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.007752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.011945] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.011984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.012013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.016087] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.016121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.016166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.020173] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.020209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.020237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.024324] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.024363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.024393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.028750] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.028816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.028847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.033243] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.033283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:17:08.302 [2024-07-13 08:03:14.033314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.037478] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.037533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.037577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.042045] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.042082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.042111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.046428] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.046473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.046488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.050808] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.050870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.050900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.055098] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.055151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.055182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.059453] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.059493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.059538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.063704] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.063803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.063835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.067912] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.067949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.067978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.072171] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.072209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.072238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.076455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.076494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.076523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.080629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.080664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.080693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.085149] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.085201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.085229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.089437] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.089474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.089503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.093595] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.093634] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.093662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.097718] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.097755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.097800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.101844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.101881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.101909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.106101] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.106178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.106209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.302 [2024-07-13 08:03:14.110247] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.302 [2024-07-13 08:03:14.110284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.302 [2024-07-13 08:03:14.110329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.562 [2024-07-13 08:03:14.114901] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.562 [2024-07-13 08:03:14.114940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.562 [2024-07-13 08:03:14.114953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.562 [2024-07-13 08:03:14.119245] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.562 [2024-07-13 08:03:14.119281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.562 [2024-07-13 08:03:14.119309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.562 [2024-07-13 08:03:14.123725] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 
00:17:08.562 [2024-07-13 08:03:14.123760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.562 [2024-07-13 08:03:14.123816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.562 [2024-07-13 08:03:14.127891] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.562 [2024-07-13 08:03:14.127924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.562 [2024-07-13 08:03:14.127953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.562 [2024-07-13 08:03:14.132072] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.562 [2024-07-13 08:03:14.132108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.562 [2024-07-13 08:03:14.132136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.562 [2024-07-13 08:03:14.136071] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.562 [2024-07-13 08:03:14.136106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.562 [2024-07-13 08:03:14.136135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.562 [2024-07-13 08:03:14.140183] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.562 [2024-07-13 08:03:14.140218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.562 [2024-07-13 08:03:14.140246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.562 [2024-07-13 08:03:14.144378] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.562 [2024-07-13 08:03:14.144431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.562 [2024-07-13 08:03:14.144477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.562 [2024-07-13 08:03:14.148609] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.562 [2024-07-13 08:03:14.148644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.562 [2024-07-13 08:03:14.148673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.562 [2024-07-13 08:03:14.152688] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.562 [2024-07-13 08:03:14.152724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.562 [2024-07-13 08:03:14.152753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.562 [2024-07-13 08:03:14.156724] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.562 [2024-07-13 08:03:14.156758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.562 [2024-07-13 08:03:14.156795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.562 [2024-07-13 08:03:14.160860] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.160894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.160922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.164960] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.164994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.165022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.169049] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.169083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.169111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.172986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.173020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.173048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.177067] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.177102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.177130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.180967] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.181001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.181029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.185302] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.185339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.185369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.189482] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.189518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.189548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.193632] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.193667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.193696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.197491] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.197526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.197569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.201477] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.201513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.201541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.205406] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.205441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.205470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.209304] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.209340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.209368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.213215] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.213250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.213278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.217136] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.217187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.217215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.221002] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.221036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.221064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.224950] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.224984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.225012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.228861] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.228895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.228923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.232978] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.233026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.233054] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.237270] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.237304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.237332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.241133] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.241185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.241213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.245108] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.245158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.245186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.248981] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.249016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.249044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.252906] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.252939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.252967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.256844] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.256877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.256905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.260732] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.260766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:17:08.563 [2024-07-13 08:03:14.260823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.264715] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.264750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.264778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.268671] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.268706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.268734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.563 [2024-07-13 08:03:14.272634] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.563 [2024-07-13 08:03:14.272668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.563 [2024-07-13 08:03:14.272696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.276556] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.276591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.276620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.280492] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.280526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.280553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.284538] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.284573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.284601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.288489] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.288524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.288552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.292569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.292604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.292632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.296626] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.296660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.296689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.300628] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.300663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.300691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.304652] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.304687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.304716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.308617] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.308651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.308679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.312624] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.312659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.312688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.316630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.316665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.316693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.320570] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.320604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.320632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.324508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.324542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.324571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.328449] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.328483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.328511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.332508] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.332542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.332570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.336574] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.336608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.336637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.340576] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.340610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.340639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.344643] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 
00:17:08.564 [2024-07-13 08:03:14.344677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.344706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.348630] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.348665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.348693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.352636] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.352670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.352698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.356548] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.356582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.356611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.360540] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.360574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.360603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.364505] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.364539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.364568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.368607] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.368640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.368669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.564 [2024-07-13 08:03:14.372755] 
nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.564 [2024-07-13 08:03:14.372832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.564 [2024-07-13 08:03:14.372861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.824 [2024-07-13 08:03:14.377163] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.824 [2024-07-13 08:03:14.377199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.824 [2024-07-13 08:03:14.377211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.824 [2024-07-13 08:03:14.380973] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.824 [2024-07-13 08:03:14.381007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.824 [2024-07-13 08:03:14.381036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.824 [2024-07-13 08:03:14.385085] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.824 [2024-07-13 08:03:14.385119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.824 [2024-07-13 08:03:14.385164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.824 [2024-07-13 08:03:14.389018] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.824 [2024-07-13 08:03:14.389051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.824 [2024-07-13 08:03:14.389079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.824 [2024-07-13 08:03:14.392956] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.824 [2024-07-13 08:03:14.393004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.824 [2024-07-13 08:03:14.393032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.824 [2024-07-13 08:03:14.397031] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.824 [2024-07-13 08:03:14.397066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.824 [2024-07-13 08:03:14.397094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:17:08.824 [2024-07-13 08:03:14.400932] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.824 [2024-07-13 08:03:14.400966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.824 [2024-07-13 08:03:14.400995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.824 [2024-07-13 08:03:14.404811] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.824 [2024-07-13 08:03:14.404844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.824 [2024-07-13 08:03:14.404872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.824 [2024-07-13 08:03:14.408642] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.824 [2024-07-13 08:03:14.408676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.824 [2024-07-13 08:03:14.408705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.824 [2024-07-13 08:03:14.412656] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.824 [2024-07-13 08:03:14.412692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.824 [2024-07-13 08:03:14.412720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.824 [2024-07-13 08:03:14.416629] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.824 [2024-07-13 08:03:14.416663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.824 [2024-07-13 08:03:14.416691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.824 [2024-07-13 08:03:14.420569] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.824 [2024-07-13 08:03:14.420604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.824 [2024-07-13 08:03:14.420632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.824 [2024-07-13 08:03:14.424521] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.824 [2024-07-13 08:03:14.424556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.824 [2024-07-13 08:03:14.424584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.824 [2024-07-13 08:03:14.428455] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.428489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.428517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.432425] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.432460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.432488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.436439] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.436474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.436503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.440400] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.440435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.440463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.444374] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.444409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.444438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.448320] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.448354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.448383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.452313] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.452348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.452376] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.456235] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.456269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.456297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.460229] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.460263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.460292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.464165] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.464199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.464227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.468059] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.468092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.468120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.472009] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.472042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.472070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.475965] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.475999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.476027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.479999] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.480050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.480079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.484234] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.484270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.484299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.488647] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.488684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.488713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.492986] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.493022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.493035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.497219] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.497254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.497283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.501301] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.501338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.501367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.505281] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.505316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.505344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.509152] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.509186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.509214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.513353] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.513391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.513420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.517683] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.825 [2024-07-13 08:03:14.517720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.825 [2024-07-13 08:03:14.517749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.825 [2024-07-13 08:03:14.522070] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.826 [2024-07-13 08:03:14.522107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.826 [2024-07-13 08:03:14.522160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.826 [2024-07-13 08:03:14.526685] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.826 [2024-07-13 08:03:14.526722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.826 [2024-07-13 08:03:14.526752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.826 [2024-07-13 08:03:14.531201] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.826 [2024-07-13 08:03:14.531240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.826 [2024-07-13 08:03:14.531271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.826 [2024-07-13 08:03:14.535664] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.826 [2024-07-13 08:03:14.535700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.826 [2024-07-13 08:03:14.535728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.826 [2024-07-13 08:03:14.539904] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.826 [2024-07-13 08:03:14.539938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.826 [2024-07-13 08:03:14.539967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.826 [2024-07-13 08:03:14.544135] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.826 [2024-07-13 08:03:14.544186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.826 [2024-07-13 08:03:14.544215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.826 [2024-07-13 08:03:14.548377] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.826 [2024-07-13 08:03:14.548413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.826 [2024-07-13 08:03:14.548442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.826 [2024-07-13 08:03:14.552465] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.826 [2024-07-13 08:03:14.552500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.826 [2024-07-13 08:03:14.552529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:08.826 [2024-07-13 08:03:14.556587] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.826 [2024-07-13 08:03:14.556622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.826 [2024-07-13 08:03:14.556650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:08.826 [2024-07-13 08:03:14.560616] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.826 [2024-07-13 08:03:14.560651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.826 [2024-07-13 08:03:14.560680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:08.826 [2024-07-13 08:03:14.564560] nvme_tcp.c:1391:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1e05420) 00:17:08.826 [2024-07-13 08:03:14.564595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:08.826 [2024-07-13 08:03:14.564624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.826 00:17:08.826 Latency(us) 00:17:08.826 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:08.826 Job: nvme0n1 
(Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:17:08.826 nvme0n1 : 2.00 7453.37 931.67 0.00 0.00 2143.58 1668.19 10724.07 00:17:08.826 =================================================================================================================== 00:17:08.826 Total : 7453.37 931.67 0.00 0.00 2143.58 1668.19 10724.07 00:17:08.826 0 00:17:08.826 08:03:14 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:08.826 08:03:14 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:08.826 08:03:14 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:08.826 | .driver_specific 00:17:08.826 | .nvme_error 00:17:08.826 | .status_code 00:17:08.826 | .command_transient_transport_error' 00:17:08.826 08:03:14 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:09.085 08:03:14 -- host/digest.sh@71 -- # (( 481 > 0 )) 00:17:09.085 08:03:14 -- host/digest.sh@73 -- # killprocess 79371 00:17:09.085 08:03:14 -- common/autotest_common.sh@926 -- # '[' -z 79371 ']' 00:17:09.085 08:03:14 -- common/autotest_common.sh@930 -- # kill -0 79371 00:17:09.085 08:03:14 -- common/autotest_common.sh@931 -- # uname 00:17:09.085 08:03:14 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:09.085 08:03:14 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79371 00:17:09.085 killing process with pid 79371 00:17:09.085 Received shutdown signal, test time was about 2.000000 seconds 00:17:09.085 00:17:09.085 Latency(us) 00:17:09.085 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.085 =================================================================================================================== 00:17:09.085 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:09.085 08:03:14 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:09.085 08:03:14 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:09.085 08:03:14 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79371' 00:17:09.085 08:03:14 -- common/autotest_common.sh@945 -- # kill 79371 00:17:09.085 08:03:14 -- common/autotest_common.sh@950 -- # wait 79371 00:17:09.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:09.343 08:03:14 -- host/digest.sh@113 -- # run_bperf_err randwrite 4096 128 00:17:09.343 08:03:14 -- host/digest.sh@54 -- # local rw bs qd 00:17:09.343 08:03:14 -- host/digest.sh@56 -- # rw=randwrite 00:17:09.343 08:03:14 -- host/digest.sh@56 -- # bs=4096 00:17:09.343 08:03:14 -- host/digest.sh@56 -- # qd=128 00:17:09.343 08:03:14 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:17:09.343 08:03:14 -- host/digest.sh@58 -- # bperfpid=79402 00:17:09.343 08:03:14 -- host/digest.sh@60 -- # waitforlisten 79402 /var/tmp/bperf.sock 00:17:09.343 08:03:14 -- common/autotest_common.sh@819 -- # '[' -z 79402 ']' 00:17:09.343 08:03:14 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:09.343 08:03:14 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:09.343 08:03:14 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
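(For reference, the transient-error check that digest.sh performs in the trace above can be reproduced by hand with the same RPC call and jq filter shown in the trace. This is a minimal sketch added for readability, not part of the original log, and it assumes the bdevperf RPC socket is still listening at /var/tmp/bperf.sock.)

  # Query per-bdev I/O statistics over the bperf RPC socket (same command as in the trace).
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')

  # The test passes when at least one TRANSIENT TRANSPORT ERROR completion was counted
  # (the randread run above reports 481).
  (( errcount > 0 )) && echo "digest errors detected: $errcount"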
00:17:09.343 08:03:14 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:09.343 08:03:14 -- common/autotest_common.sh@10 -- # set +x 00:17:09.343 [2024-07-13 08:03:15.013705] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:09.343 [2024-07-13 08:03:15.013988] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79402 ] 00:17:09.343 [2024-07-13 08:03:15.146896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:09.601 [2024-07-13 08:03:15.179941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:09.601 08:03:15 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:09.601 08:03:15 -- common/autotest_common.sh@852 -- # return 0 00:17:09.601 08:03:15 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:09.601 08:03:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:09.860 08:03:15 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:09.860 08:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:09.860 08:03:15 -- common/autotest_common.sh@10 -- # set +x 00:17:09.860 08:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:09.860 08:03:15 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:09.860 08:03:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:10.119 nvme0n1 00:17:10.119 08:03:15 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:17:10.119 08:03:15 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:10.119 08:03:15 -- common/autotest_common.sh@10 -- # set +x 00:17:10.119 08:03:15 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:10.119 08:03:15 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:10.119 08:03:15 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:10.119 Running I/O for 2 seconds... 
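(The trace above tears down the randread bdevperf instance and prepares a fresh one for the randwrite 4096/128 run. The following is a hedged sketch of that setup sequence, reconstructed only from the commands and arguments visible in the trace; the target address 10.0.0.2:4420, the NQN, and the file paths come from the log itself, and the split between the bperf socket and the harness's rpc_cmd channel is inferred from the shell trace.)

  # Start a fresh bdevperf instance on core 1 (mask 0x2); -z makes it wait for RPC configuration.
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  # bdevperf-side options, sent over the bperf socket (bperf_rpc in the trace):
  # enable per-NVMe-error statistics, retry forever, and attach the TCP controller with
  # data digest enabled (--ddgst).
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # Error injection, sent with rpc_cmd in the harness (apparently the target application's RPC
  # channel): clear any previous injection, then corrupt crc32c results (-i 256, as in the trace)
  # so that the host-side initiator reports data digest errors.
  $rpc accel_error_inject_error -o crc32c -t disable
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 256

  # Run the workload defined on the bdevperf command line (randwrite, 4 KiB I/O, qd 128, 2 s).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests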
00:17:10.119 [2024-07-13 08:03:15.910128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ddc00 00:17:10.119 [2024-07-13 08:03:15.911546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.119 [2024-07-13 08:03:15.911585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:10.119 [2024-07-13 08:03:15.924916] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fef90 00:17:10.119 [2024-07-13 08:03:15.926285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.119 [2024-07-13 08:03:15.926321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:15.940212] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ff3c8 00:17:10.378 [2024-07-13 08:03:15.941565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:15.941613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:15.954660] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190feb58 00:17:10.378 [2024-07-13 08:03:15.956056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:15.956089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:15.969140] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fe720 00:17:10.378 [2024-07-13 08:03:15.970521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:15.970568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:15.983506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fe2e8 00:17:10.378 [2024-07-13 08:03:15.984733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:6778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:15.984765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:15.999556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fdeb0 00:17:10.378 [2024-07-13 08:03:16.000933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:16.000961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:17:10.378 [2024-07-13 08:03:16.014986] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fda78 00:17:10.378 [2024-07-13 08:03:16.016150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:16.016180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:16.029469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fd640 00:17:10.378 [2024-07-13 08:03:16.030844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:16.030869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:16.044827] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fd208 00:17:10.378 [2024-07-13 08:03:16.046190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:16.046227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:16.060028] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fcdd0 00:17:10.378 [2024-07-13 08:03:16.061303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:16.061336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:16.074676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fc998 00:17:10.378 [2024-07-13 08:03:16.075963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:16.075995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:16.089057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fc560 00:17:10.378 [2024-07-13 08:03:16.090322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:16.090359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:16.103959] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fc128 00:17:10.378 [2024-07-13 08:03:16.105099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:21109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:16.105131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0074 
p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:16.118127] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fbcf0 00:17:10.378 [2024-07-13 08:03:16.119430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:4530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:16.119462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:16.132429] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fb8b8 00:17:10.378 [2024-07-13 08:03:16.133570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:16.133601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:16.146614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fb480 00:17:10.378 [2024-07-13 08:03:16.147885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:16.147945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:16.161013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fb048 00:17:10.378 [2024-07-13 08:03:16.162138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:25275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:16.162220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:16.175247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fac10 00:17:10.378 [2024-07-13 08:03:16.176390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:16.176422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:17:10.378 [2024-07-13 08:03:16.189510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fa7d8 00:17:10.378 [2024-07-13 08:03:16.190815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.378 [2024-07-13 08:03:16.190872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:17:10.637 [2024-07-13 08:03:16.204302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190fa3a0 00:17:10.637 [2024-07-13 08:03:16.205411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.637 [2024-07-13 08:03:16.205458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 
cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:17:10.637 [2024-07-13 08:03:16.218563] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f9f68 00:17:10.637 [2024-07-13 08:03:16.219662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.637 [2024-07-13 08:03:16.219722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:17:10.637 [2024-07-13 08:03:16.232661] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f9b30 00:17:10.637 [2024-07-13 08:03:16.233729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7215 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.637 [2024-07-13 08:03:16.233763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:17:10.637 [2024-07-13 08:03:16.246819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f96f8 00:17:10.637 [2024-07-13 08:03:16.248015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.637 [2024-07-13 08:03:16.248047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:17:10.637 [2024-07-13 08:03:16.261377] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f92c0 00:17:10.637 [2024-07-13 08:03:16.262567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.637 [2024-07-13 08:03:16.262603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:17:10.637 [2024-07-13 08:03:16.277673] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f8e88 00:17:10.637 [2024-07-13 08:03:16.278931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:21361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.637 [2024-07-13 08:03:16.278965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:17:10.637 [2024-07-13 08:03:16.293578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f8a50 00:17:10.637 [2024-07-13 08:03:16.294754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19795 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.637 [2024-07-13 08:03:16.294794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:17:10.637 [2024-07-13 08:03:16.310034] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f8618 00:17:10.637 [2024-07-13 08:03:16.311137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:2121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.637 [2024-07-13 08:03:16.311203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:55 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:17:10.637 [2024-07-13 08:03:16.325699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f81e0 00:17:10.637 [2024-07-13 08:03:16.326860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.637 [2024-07-13 08:03:16.326891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:17:10.637 [2024-07-13 08:03:16.340626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f7da8 00:17:10.637 [2024-07-13 08:03:16.341744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.638 [2024-07-13 08:03:16.341801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:17:10.638 [2024-07-13 08:03:16.357487] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f7970 00:17:10.638 [2024-07-13 08:03:16.358662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.638 [2024-07-13 08:03:16.358696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:17:10.638 [2024-07-13 08:03:16.373707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f7538 00:17:10.638 [2024-07-13 08:03:16.374872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.638 [2024-07-13 08:03:16.374907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:17:10.638 [2024-07-13 08:03:16.389490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f7100 00:17:10.638 [2024-07-13 08:03:16.390584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.638 [2024-07-13 08:03:16.390620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:10.638 [2024-07-13 08:03:16.404804] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f6cc8 00:17:10.638 [2024-07-13 08:03:16.405861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:21443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.638 [2024-07-13 08:03:16.405895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:17:10.638 [2024-07-13 08:03:16.420458] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f6890 00:17:10.638 [2024-07-13 08:03:16.421579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:8445 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.638 [2024-07-13 08:03:16.421609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:17:10.638 [2024-07-13 08:03:16.435447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f6458 00:17:10.638 [2024-07-13 08:03:16.436466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:19155 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.638 [2024-07-13 08:03:16.436496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:17:10.638 [2024-07-13 08:03:16.450998] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f6020 00:17:10.638 [2024-07-13 08:03:16.452025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8273 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.638 [2024-07-13 08:03:16.452070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.466416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f5be8 00:17:10.897 [2024-07-13 08:03:16.467471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:24016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.467500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.481464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f57b0 00:17:10.897 [2024-07-13 08:03:16.482560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.482591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.496417] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f5378 00:17:10.897 [2024-07-13 08:03:16.497432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.497463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.511464] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f4f40 00:17:10.897 [2024-07-13 08:03:16.512399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:10324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.512444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.526018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f4b08 00:17:10.897 [2024-07-13 08:03:16.527042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.527071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.541728] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f46d0 00:17:10.897 [2024-07-13 08:03:16.542718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:15365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.542747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.556701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f4298 00:17:10.897 [2024-07-13 08:03:16.557636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:17488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.557697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.572698] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f3e60 00:17:10.897 [2024-07-13 08:03:16.573693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.573723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.589075] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f3a28 00:17:10.897 [2024-07-13 08:03:16.590017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.590075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.604490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f35f0 00:17:10.897 [2024-07-13 08:03:16.605416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.605460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.619841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f31b8 00:17:10.897 [2024-07-13 08:03:16.620697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.620726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.635325] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f2d80 00:17:10.897 [2024-07-13 08:03:16.636272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.636318] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.650561] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f2948 00:17:10.897 [2024-07-13 08:03:16.651454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:2408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.651478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.664840] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f2510 00:17:10.897 [2024-07-13 08:03:16.665628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:25496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.665652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.679013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f20d8 00:17:10.897 [2024-07-13 08:03:16.679775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:23114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.679822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.693230] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f1ca0 00:17:10.897 [2024-07-13 08:03:16.694063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:18015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.694091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:17:10.897 [2024-07-13 08:03:16.707306] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f1868 00:17:10.897 [2024-07-13 08:03:16.708234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:14171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:10.897 [2024-07-13 08:03:16.708265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:17:11.156 [2024-07-13 08:03:16.722294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f1430 00:17:11.157 [2024-07-13 08:03:16.723152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.723181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.736444] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f0ff8 00:17:11.157 [2024-07-13 08:03:16.737227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.737266] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.750479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f0bc0 00:17:11.157 [2024-07-13 08:03:16.751223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.751248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.764621] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f0788 00:17:11.157 [2024-07-13 08:03:16.765382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:14676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.765407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.778837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190f0350 00:17:11.157 [2024-07-13 08:03:16.779542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.779566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.792890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190eff18 00:17:11.157 [2024-07-13 08:03:16.793579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:3408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.793604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.807000] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190efae0 00:17:11.157 [2024-07-13 08:03:16.807679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.807704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.821346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ef6a8 00:17:11.157 [2024-07-13 08:03:16.822053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.822079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.835574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ef270 00:17:11.157 [2024-07-13 08:03:16.836311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 
08:03:16.836336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.849809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190eee38 00:17:11.157 [2024-07-13 08:03:16.850543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.850583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.863981] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190eea00 00:17:11.157 [2024-07-13 08:03:16.864698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.864722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.879641] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ee5c8 00:17:11.157 [2024-07-13 08:03:16.880411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:19979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.880438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.896990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ee190 00:17:11.157 [2024-07-13 08:03:16.897648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.897679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.911302] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190edd58 00:17:11.157 [2024-07-13 08:03:16.911902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.911926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.925626] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ed920 00:17:11.157 [2024-07-13 08:03:16.926312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.926339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.940383] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ed4e8 00:17:11.157 [2024-07-13 08:03:16.941135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:1020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 
[2024-07-13 08:03:16.941178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:17:11.157 [2024-07-13 08:03:16.955858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ed0b0 00:17:11.157 [2024-07-13 08:03:16.956501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:8993 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.157 [2024-07-13 08:03:16.956541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:17:11.416 [2024-07-13 08:03:16.971514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ecc78 00:17:11.416 [2024-07-13 08:03:16.972211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.416 [2024-07-13 08:03:16.972238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:17:11.416 [2024-07-13 08:03:16.986129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ec840 00:17:11.416 [2024-07-13 08:03:16.986843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:14084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:16.986876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.000436] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ec408 00:17:11.417 [2024-07-13 08:03:17.001019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:17498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:17.001045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.014730] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ebfd0 00:17:11.417 [2024-07-13 08:03:17.015303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:12874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:17.015329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.028952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ebb98 00:17:11.417 [2024-07-13 08:03:17.029525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:11178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:17.029564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.043708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190eb760 00:17:11.417 [2024-07-13 08:03:17.044263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:17:11.417 [2024-07-13 08:03:17.044289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.057970] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190eb328 00:17:11.417 [2024-07-13 08:03:17.058577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:18579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:17.058601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.072269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190eaef0 00:17:11.417 [2024-07-13 08:03:17.072813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:1509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:17.072847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.086801] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190eaab8 00:17:11.417 [2024-07-13 08:03:17.087318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:10140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:17.087343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.101030] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ea680 00:17:11.417 [2024-07-13 08:03:17.101580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:9109 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:17.101600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.115343] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190ea248 00:17:11.417 [2024-07-13 08:03:17.115861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:16396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:17.115893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.130038] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e9e10 00:17:11.417 [2024-07-13 08:03:17.130682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:17.130708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.145649] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e99d8 00:17:11.417 [2024-07-13 08:03:17.146180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:792 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:17:11.417 [2024-07-13 08:03:17.146206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.160722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e95a0 00:17:11.417 [2024-07-13 08:03:17.161239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:9493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:17.161264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.175078] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e9168 00:17:11.417 [2024-07-13 08:03:17.175516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:10772 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:17.175543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.189371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e8d30 00:17:11.417 [2024-07-13 08:03:17.189924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:17.189957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.204772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e88f8 00:17:11.417 [2024-07-13 08:03:17.205290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:4005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:17.205316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:17:11.417 [2024-07-13 08:03:17.219366] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e84c0 00:17:11.417 [2024-07-13 08:03:17.219764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:10971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.417 [2024-07-13 08:03:17.219792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:17:11.676 [2024-07-13 08:03:17.234427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e8088 00:17:11.676 [2024-07-13 08:03:17.234976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:4077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.676 [2024-07-13 08:03:17.235018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:11.676 [2024-07-13 08:03:17.250293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e7c50 00:17:11.676 [2024-07-13 08:03:17.250772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:15517 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:17:11.676 [2024-07-13 08:03:17.250804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:11.676 [2024-07-13 08:03:17.265193] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e7818 00:17:11.676 [2024-07-13 08:03:17.265589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:21438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.676 [2024-07-13 08:03:17.265628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:11.676 [2024-07-13 08:03:17.279427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e73e0 00:17:11.676 [2024-07-13 08:03:17.279800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.676 [2024-07-13 08:03:17.279824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:11.676 [2024-07-13 08:03:17.293543] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e6fa8 00:17:11.676 [2024-07-13 08:03:17.293942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.676 [2024-07-13 08:03:17.293966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:11.676 [2024-07-13 08:03:17.307915] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e6b70 00:17:11.676 [2024-07-13 08:03:17.308265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:5227 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.676 [2024-07-13 08:03:17.308289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:11.676 [2024-07-13 08:03:17.322286] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e6738 00:17:11.676 [2024-07-13 08:03:17.322687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.676 [2024-07-13 08:03:17.322710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:11.676 [2024-07-13 08:03:17.336385] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e6300 00:17:11.677 [2024-07-13 08:03:17.336713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:9280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.677 [2024-07-13 08:03:17.336737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:11.677 [2024-07-13 08:03:17.350551] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e5ec8 00:17:11.677 [2024-07-13 08:03:17.350900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3530 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.677 [2024-07-13 08:03:17.350924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:17:11.677 [2024-07-13 08:03:17.364749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e5a90 00:17:11.677 [2024-07-13 08:03:17.365073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.677 [2024-07-13 08:03:17.365097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:17:11.677 [2024-07-13 08:03:17.379095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e5658 00:17:11.677 [2024-07-13 08:03:17.379400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.677 [2024-07-13 08:03:17.379423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:17:11.677 [2024-07-13 08:03:17.394229] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e5220 00:17:11.677 [2024-07-13 08:03:17.394578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:19579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.677 [2024-07-13 08:03:17.394602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:17:11.677 [2024-07-13 08:03:17.408675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e4de8 00:17:11.677 [2024-07-13 08:03:17.408997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.677 [2024-07-13 08:03:17.409021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:17:11.677 [2024-07-13 08:03:17.424895] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e49b0 00:17:11.677 [2024-07-13 08:03:17.425271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:13958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.677 [2024-07-13 08:03:17.425296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:17:11.677 [2024-07-13 08:03:17.440647] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e4578 00:17:11.677 [2024-07-13 08:03:17.440975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.677 [2024-07-13 08:03:17.440997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:17:11.677 [2024-07-13 08:03:17.455550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e4140 00:17:11.677 [2024-07-13 08:03:17.455875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 
nsid:1 lba:18994 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.677 [2024-07-13 08:03:17.455896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:17:11.677 [2024-07-13 08:03:17.470565] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e3d08 00:17:11.677 [2024-07-13 08:03:17.470906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:14959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.677 [2024-07-13 08:03:17.470928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:17:11.677 [2024-07-13 08:03:17.486321] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e38d0 00:17:11.677 [2024-07-13 08:03:17.486676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.677 [2024-07-13 08:03:17.486696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:17:11.936 [2024-07-13 08:03:17.502906] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e3498 00:17:11.936 [2024-07-13 08:03:17.503165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.936 [2024-07-13 08:03:17.503191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:17:11.936 [2024-07-13 08:03:17.519018] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e3060 00:17:11.936 [2024-07-13 08:03:17.519295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.936 [2024-07-13 08:03:17.519322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:17:11.936 [2024-07-13 08:03:17.534813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e2c28 00:17:11.936 [2024-07-13 08:03:17.535060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.936 [2024-07-13 08:03:17.535095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:17:11.936 [2024-07-13 08:03:17.550572] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e27f0 00:17:11.936 [2024-07-13 08:03:17.550839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:24852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.936 [2024-07-13 08:03:17.550861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:17:11.936 [2024-07-13 08:03:17.565960] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e23b8 00:17:11.936 [2024-07-13 08:03:17.566224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:12721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.936 [2024-07-13 08:03:17.566250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:17:11.936 [2024-07-13 08:03:17.581999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e1f80 00:17:11.936 [2024-07-13 08:03:17.582238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.936 [2024-07-13 08:03:17.582260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:17:11.936 [2024-07-13 08:03:17.598411] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e1b48 00:17:11.936 [2024-07-13 08:03:17.598646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.936 [2024-07-13 08:03:17.598666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:17:11.936 [2024-07-13 08:03:17.613951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e1710 00:17:11.936 [2024-07-13 08:03:17.614160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:1624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.936 [2024-07-13 08:03:17.614182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:17:11.936 [2024-07-13 08:03:17.628932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e12d8 00:17:11.936 [2024-07-13 08:03:17.629114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:13195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.936 [2024-07-13 08:03:17.629134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:17:11.936 [2024-07-13 08:03:17.643864] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e0ea0 00:17:11.936 [2024-07-13 08:03:17.644041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:9321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.936 [2024-07-13 08:03:17.644061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:17:11.936 [2024-07-13 08:03:17.659431] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e0a68 00:17:11.936 [2024-07-13 08:03:17.659603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:17443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.936 [2024-07-13 08:03:17.659623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:17:11.936 [2024-07-13 08:03:17.673976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e0630 00:17:11.936 [2024-07-13 08:03:17.674120] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.936 [2024-07-13 08:03:17.674165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:17:11.936 [2024-07-13 08:03:17.688057] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190e01f8 00:17:11.937 [2024-07-13 08:03:17.688209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.937 [2024-07-13 08:03:17.688229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:17:11.937 [2024-07-13 08:03:17.702236] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190dfdc0 00:17:11.937 [2024-07-13 08:03:17.702370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.937 [2024-07-13 08:03:17.702391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:17:11.937 [2024-07-13 08:03:17.716217] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190df988 00:17:11.937 [2024-07-13 08:03:17.716334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:17981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.937 [2024-07-13 08:03:17.716353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:17:11.937 [2024-07-13 08:03:17.731099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190df550 00:17:11.937 [2024-07-13 08:03:17.731244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:9865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.937 [2024-07-13 08:03:17.731265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:17:11.937 [2024-07-13 08:03:17.746248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190df118 00:17:11.937 [2024-07-13 08:03:17.746359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:11.937 [2024-07-13 08:03:17.746382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:17:12.196 [2024-07-13 08:03:17.761544] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190dece0 00:17:12.196 [2024-07-13 08:03:17.761640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:14657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.196 [2024-07-13 08:03:17.761660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:17:12.196 [2024-07-13 08:03:17.776882] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190de8a8 00:17:12.196 [2024-07-13 08:03:17.776966] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.196 [2024-07-13 08:03:17.776986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:17:12.196 [2024-07-13 08:03:17.791582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190de038 00:17:12.196 [2024-07-13 08:03:17.791656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12610 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.196 [2024-07-13 08:03:17.791676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:17:12.196 [2024-07-13 08:03:17.812937] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190de038 00:17:12.196 [2024-07-13 08:03:17.814314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.196 [2024-07-13 08:03:17.814346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:12.196 [2024-07-13 08:03:17.827976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190de470 00:17:12.196 [2024-07-13 08:03:17.829252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:11398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.196 [2024-07-13 08:03:17.829295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:17:12.196 [2024-07-13 08:03:17.842451] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190de8a8 00:17:12.196 [2024-07-13 08:03:17.843763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:13824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.196 [2024-07-13 08:03:17.843842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:17:12.196 [2024-07-13 08:03:17.857329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190dece0 00:17:12.196 [2024-07-13 08:03:17.858672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24573 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.196 [2024-07-13 08:03:17.858714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:17:12.196 [2024-07-13 08:03:17.871841] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190df118 00:17:12.196 [2024-07-13 08:03:17.873089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:2707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.196 [2024-07-13 08:03:17.873133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:17:12.196 [2024-07-13 08:03:17.886269] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdc90) with pdu=0x2000190df550 00:17:12.196 [2024-07-13 
08:03:17.887570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:2110 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:17:12.196 [2024-07-13 08:03:17.887613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:17:12.196 00:17:12.196 Latency(us) 00:17:12.196 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.196 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:12.196 nvme0n1 : 2.00 16995.69 66.39 0.00 0.00 7525.83 6672.76 22043.93 00:17:12.196 =================================================================================================================== 00:17:12.196 Total : 16995.69 66.39 0.00 0.00 7525.83 6672.76 22043.93 00:17:12.196 0 00:17:12.196 08:03:17 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:12.196 08:03:17 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:12.196 08:03:17 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:12.196 | .driver_specific 00:17:12.196 | .nvme_error 00:17:12.196 | .status_code 00:17:12.196 | .command_transient_transport_error' 00:17:12.196 08:03:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:12.455 08:03:18 -- host/digest.sh@71 -- # (( 133 > 0 )) 00:17:12.455 08:03:18 -- host/digest.sh@73 -- # killprocess 79402 00:17:12.455 08:03:18 -- common/autotest_common.sh@926 -- # '[' -z 79402 ']' 00:17:12.455 08:03:18 -- common/autotest_common.sh@930 -- # kill -0 79402 00:17:12.455 08:03:18 -- common/autotest_common.sh@931 -- # uname 00:17:12.455 08:03:18 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:12.455 08:03:18 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79402 00:17:12.455 08:03:18 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:12.455 killing process with pid 79402 00:17:12.455 08:03:18 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:12.455 08:03:18 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79402' 00:17:12.455 Received shutdown signal, test time was about 2.000000 seconds 00:17:12.455 00:17:12.455 Latency(us) 00:17:12.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:12.455 =================================================================================================================== 00:17:12.455 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:12.455 08:03:18 -- common/autotest_common.sh@945 -- # kill 79402 00:17:12.455 08:03:18 -- common/autotest_common.sh@950 -- # wait 79402 00:17:12.713 08:03:18 -- host/digest.sh@114 -- # run_bperf_err randwrite 131072 16 00:17:12.713 08:03:18 -- host/digest.sh@54 -- # local rw bs qd 00:17:12.713 08:03:18 -- host/digest.sh@56 -- # rw=randwrite 00:17:12.713 08:03:18 -- host/digest.sh@56 -- # bs=131072 00:17:12.713 08:03:18 -- host/digest.sh@56 -- # qd=16 00:17:12.713 08:03:18 -- host/digest.sh@58 -- # bperfpid=79433 00:17:12.713 08:03:18 -- host/digest.sh@60 -- # waitforlisten 79433 /var/tmp/bperf.sock 00:17:12.713 08:03:18 -- common/autotest_common.sh@819 -- # '[' -z 79433 ']' 00:17:12.713 08:03:18 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bperf.sock 00:17:12.713 08:03:18 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:17:12.713 08:03:18 -- 
common/autotest_common.sh@824 -- # local max_retries=100 00:17:12.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:17:12.713 08:03:18 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:17:12.713 08:03:18 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:12.713 08:03:18 -- common/autotest_common.sh@10 -- # set +x 00:17:12.713 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:12.713 Zero copy mechanism will not be used. 00:17:12.713 [2024-07-13 08:03:18.353511] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:12.713 [2024-07-13 08:03:18.353606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79433 ] 00:17:12.713 [2024-07-13 08:03:18.489751] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:12.713 [2024-07-13 08:03:18.524898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.647 08:03:19 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:13.647 08:03:19 -- common/autotest_common.sh@852 -- # return 0 00:17:13.647 08:03:19 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:13.647 08:03:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:17:13.905 08:03:19 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:17:13.905 08:03:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:13.905 08:03:19 -- common/autotest_common.sh@10 -- # set +x 00:17:13.905 08:03:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:13.905 08:03:19 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:13.905 08:03:19 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:17:14.163 nvme0n1 00:17:14.163 08:03:19 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:17:14.163 08:03:19 -- common/autotest_common.sh@551 -- # xtrace_disable 00:17:14.163 08:03:19 -- common/autotest_common.sh@10 -- # set +x 00:17:14.163 08:03:19 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:17:14.163 08:03:19 -- host/digest.sh@69 -- # bperf_py perform_tests 00:17:14.163 08:03:19 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:17:14.421 I/O size of 131072 is greater than zero copy threshold (65536). 00:17:14.421 Zero copy mechanism will not be used. 00:17:14.421 Running I/O for 2 seconds... 
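Annotation (not part of the captured trace): the xtrace lines above show the flow of this digest pass — bdevperf is launched with -w randwrite -o 131072 -q 16 -t 2 -z against /var/tmp/bperf.sock, NVMe error counters are enabled, the controller is attached over TCP with --ddgst, crc32c corruption is injected every 32 operations, and the transient-error counter is read back afterwards. A minimal sketch of that same sequence, reconstructed only from the traced commands (paths, the 10.0.0.2:4420 target, and the nvme0/nvme0n1 names come straight from the trace and are assumptions outside this environment):

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
SOCK=/var/tmp/bperf.sock

# Start bdevperf in the background; -z makes it wait for an RPC ("perform_tests") before issuing I/O.
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z &

# Enable per-controller NVMe error counters, disable bdev-level retries, then attach the
# target with the TCP data digest enabled so every PDU payload is CRC32C-checked on receive.
"$RPC" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
"$RPC" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt every 32nd crc32c calculation on the target side (the trace uses rpc_cmd, i.e. the
# target application's default RPC socket, not the bperf socket) so the host sees digest errors.
"$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32

# Kick off the 2-second workload.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$SOCK" perform_tests

# Afterwards, read back the transient transport error count the same way
# get_transient_errcount in host/digest.sh does.
"$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'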
00:17:14.421 [2024-07-13 08:03:20.056128] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.421 [2024-07-13 08:03:20.056495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.421 [2024-07-13 08:03:20.056541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.421 [2024-07-13 08:03:20.061708] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.421 [2024-07-13 08:03:20.062052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.062084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.067300] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.067617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.067648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.072832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.073157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.073187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.078073] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.078403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.078433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.083525] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.083845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.083888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.088999] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.089316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.089345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.094445] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.094754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.094794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.099716] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.100046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.100079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.105137] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.105456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.105485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.110536] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.110860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.110889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.115932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.116339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.116379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.121312] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.121621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.121650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.126636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.126989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.127055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.132317] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.132643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.132673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.137677] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.137998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.138031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.143048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.143366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.143395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.148412] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.148768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.148805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.153850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.154189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.154219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.159289] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.159601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.159629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.164783] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.165118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.165147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.170081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.170408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.170437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.175442] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.175836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.175876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.180885] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.181201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.181230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.186189] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.186503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.186532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.191462] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.191776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.191816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.196709] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.197046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.197075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.201969] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.202294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 
[2024-07-13 08:03:20.202323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.207156] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.422 [2024-07-13 08:03:20.207467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.422 [2024-07-13 08:03:20.207496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.422 [2024-07-13 08:03:20.212337] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.423 [2024-07-13 08:03:20.212648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.423 [2024-07-13 08:03:20.212678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.423 [2024-07-13 08:03:20.217606] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.423 [2024-07-13 08:03:20.217936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.423 [2024-07-13 08:03:20.217966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.423 [2024-07-13 08:03:20.223046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.423 [2024-07-13 08:03:20.223356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.423 [2024-07-13 08:03:20.223386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.423 [2024-07-13 08:03:20.228293] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.423 [2024-07-13 08:03:20.228610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.423 [2024-07-13 08:03:20.228639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.423 [2024-07-13 08:03:20.233720] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.423 [2024-07-13 08:03:20.234059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.423 [2024-07-13 08:03:20.234088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.682 [2024-07-13 08:03:20.239225] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.682 [2024-07-13 08:03:20.239535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:14.682 [2024-07-13 08:03:20.239566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.682 [2024-07-13 08:03:20.244757] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.245097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.245132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.250041] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.250369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.250398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.255283] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.255597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.255626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.260534] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.260957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.260987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.265990] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.266312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.266341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.271254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.271571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.271599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.276599] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.276906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.276949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.281789] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.282115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.282153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.286615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.286940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.286974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.291625] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.291948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.291987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.296506] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.296842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.296882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.301459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.301805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.301840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.306633] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.307035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.307078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.311934] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.312316] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.312358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.316984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.317259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.317285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.321665] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.321967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.321993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.326533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.326826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.326862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.331369] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.331654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.331681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.336187] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.336467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.336492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.341048] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.341398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.341426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.346122] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.683 [2024-07-13 08:03:20.346468] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.683 [2024-07-13 08:03:20.346497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.683 [2024-07-13 08:03:20.351394] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.351707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.351733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.356384] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.356697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.356723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.361423] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.361736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.361762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.366247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.366618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.366660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.371088] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.371385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.371427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.376093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.376396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.376422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.380992] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 
00:17:14.684 [2024-07-13 08:03:20.381314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.381341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.386035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.386388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.386418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.391056] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.391334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.391360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.395813] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.396133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.396167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.400636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.400947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.400974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.405416] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.405713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.405740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.410081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.410419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.410478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.414928] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.415205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.415230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.419670] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.420022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.420053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.424474] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.424752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.424787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.429260] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.429549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.429575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.434031] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.434376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.434404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.438903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.439183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.439209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.443668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.444003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.444034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.448344] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.448622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.448648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.453131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.453411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.453437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.457900] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.458225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.684 [2024-07-13 08:03:20.458253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.684 [2024-07-13 08:03:20.462907] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.684 [2024-07-13 08:03:20.463186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.685 [2024-07-13 08:03:20.463212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.685 [2024-07-13 08:03:20.467769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.685 [2024-07-13 08:03:20.468111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.685 [2024-07-13 08:03:20.468138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.685 [2024-07-13 08:03:20.472532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.685 [2024-07-13 08:03:20.472812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.685 [2024-07-13 08:03:20.472848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.685 [2024-07-13 08:03:20.477258] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.685 [2024-07-13 08:03:20.477559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.685 [2024-07-13 08:03:20.477601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:17:14.685 [2024-07-13 08:03:20.482016] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.685 [2024-07-13 08:03:20.482371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.685 [2024-07-13 08:03:20.482420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.685 [2024-07-13 08:03:20.487005] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.685 [2024-07-13 08:03:20.487287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.685 [2024-07-13 08:03:20.487313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.685 [2024-07-13 08:03:20.491758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.685 [2024-07-13 08:03:20.492130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.685 [2024-07-13 08:03:20.492165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.497334] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.497672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.497699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.502441] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.502830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.502868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.507514] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.507836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.507863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.512549] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.512884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.512911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.517488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.517788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.517823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.522387] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.522738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.522765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.527270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.527555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.527581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.532139] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.532435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.532462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.536849] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.537129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.537154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.541496] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.541774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.541810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.546309] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.546624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.546666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.551171] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.551449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.551474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.555844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.556165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.556199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.560648] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.560967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.560991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.565347] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.565626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.565652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.569973] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.570302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.570329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.574837] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.575129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.575155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.579539] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.579843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.946 [2024-07-13 08:03:20.579870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.946 [2024-07-13 08:03:20.584279] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.946 [2024-07-13 08:03:20.584569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.584595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.588904] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.589198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.589223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.593582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.593889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.593916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.598371] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.598669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.598695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.603117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.603420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.603446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.608262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.608604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.608631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.613490] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.613804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 
[2024-07-13 08:03:20.613839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.618662] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.618982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.619010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.623787] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.624120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.624163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.629019] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.629355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.629384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.634311] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.634640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.634667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.639786] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.640189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.640229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.645072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.645403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.645431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.650211] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.650585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.650610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.655569] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.655863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.655898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.660812] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.661198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.661233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.665773] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.666093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.666120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.670659] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.670988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.671019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.675504] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.675781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.675816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.680249] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.680535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.680576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.685065] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.685345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.685370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.689754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.690081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.690124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.694845] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.947 [2024-07-13 08:03:20.695150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.947 [2024-07-13 08:03:20.695191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.947 [2024-07-13 08:03:20.700290] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.948 [2024-07-13 08:03:20.700627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.948 [2024-07-13 08:03:20.700670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.948 [2024-07-13 08:03:20.705345] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.948 [2024-07-13 08:03:20.705627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.948 [2024-07-13 08:03:20.705653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.948 [2024-07-13 08:03:20.710096] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.948 [2024-07-13 08:03:20.710447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.948 [2024-07-13 08:03:20.710489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.948 [2024-07-13 08:03:20.714976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.948 [2024-07-13 08:03:20.715256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.948 [2024-07-13 08:03:20.715281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.948 [2024-07-13 08:03:20.719996] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.948 [2024-07-13 08:03:20.720323] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.948 [2024-07-13 08:03:20.720350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.948 [2024-07-13 08:03:20.724880] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.948 [2024-07-13 08:03:20.725174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.948 [2024-07-13 08:03:20.725199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.948 [2024-07-13 08:03:20.729612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.948 [2024-07-13 08:03:20.729959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.948 [2024-07-13 08:03:20.729990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.948 [2024-07-13 08:03:20.734938] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.948 [2024-07-13 08:03:20.735224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.948 [2024-07-13 08:03:20.735250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:14.948 [2024-07-13 08:03:20.739750] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.948 [2024-07-13 08:03:20.740065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.948 [2024-07-13 08:03:20.740092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:14.948 [2024-07-13 08:03:20.744616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.948 [2024-07-13 08:03:20.744924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.948 [2024-07-13 08:03:20.744951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:14.948 [2024-07-13 08:03:20.749379] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.948 [2024-07-13 08:03:20.749653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.948 [2024-07-13 08:03:20.749679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:14.948 [2024-07-13 08:03:20.754104] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:14.948 
[2024-07-13 08:03:20.754460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:14.948 [2024-07-13 08:03:20.754516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.208 [2024-07-13 08:03:20.759488] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.208 [2024-07-13 08:03:20.759818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.208 [2024-07-13 08:03:20.759855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.208 [2024-07-13 08:03:20.764533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.208 [2024-07-13 08:03:20.764872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.208 [2024-07-13 08:03:20.764899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.208 [2024-07-13 08:03:20.769562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.208 [2024-07-13 08:03:20.769874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.208 [2024-07-13 08:03:20.769901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.208 [2024-07-13 08:03:20.774475] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.208 [2024-07-13 08:03:20.774817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.208 [2024-07-13 08:03:20.774854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.208 [2024-07-13 08:03:20.779346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.208 [2024-07-13 08:03:20.779626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.208 [2024-07-13 08:03:20.779652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.208 [2024-07-13 08:03:20.784167] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.208 [2024-07-13 08:03:20.784453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.208 [2024-07-13 08:03:20.784480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.208 [2024-07-13 08:03:20.789093] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with 
pdu=0x2000190fef90 00:17:15.208 [2024-07-13 08:03:20.789376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.208 [2024-07-13 08:03:20.789401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.208 [2024-07-13 08:03:20.793887] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.208 [2024-07-13 08:03:20.794215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.208 [2024-07-13 08:03:20.794243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.208 [2024-07-13 08:03:20.798775] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.208 [2024-07-13 08:03:20.799136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.208 [2024-07-13 08:03:20.799180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.208 [2024-07-13 08:03:20.803612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.208 [2024-07-13 08:03:20.803926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.803953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.808479] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.808773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.808808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.813248] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.813549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.813576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.818060] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.818406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.818434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.822976] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.823276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.823302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.827832] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.828114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.828156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.832733] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.833059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.833117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.837516] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.837806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.837831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.842690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.843031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.843063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.847644] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.847929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.847954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.852819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.853230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.853258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.858226] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.858624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.858650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.863754] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.864162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.864191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.868953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.869249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.869277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.873921] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.874234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.874279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.878903] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.879214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.879242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.884017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.884320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.884347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.889353] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.889675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.889703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
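(The data_crc32_calc_done errors above report a CRC-32C data digest mismatch on a PDU for the queue pair, and each affected WRITE then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22), as the paired notices show. For illustration only, a minimal self-contained bitwise CRC-32C check of the kind such a digest involves is sketched here; this is not SPDK's implementation, and pdu_payload / expected_digest are invented names for the example.)

    /* crc32c_sketch.c - illustrative CRC-32C data digest check (not SPDK code) */
    #include <stdint.h>
    #include <stddef.h>
    #include <string.h>
    #include <stdio.h>

    /* Bitwise CRC-32C (Castagnoli): reflected polynomial 0x82F63B78,
     * initial value 0xFFFFFFFF, final XOR 0xFFFFFFFF. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc & 1u) ? (crc >> 1) ^ 0x82F63B78u : crc >> 1;
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Hypothetical PDU payload and the digest the sender claims for it. */
        const uint8_t pdu_payload[] = "hello, nvme/tcp data digest";
        size_t payload_len = sizeof(pdu_payload) - 1;
        uint32_t expected_digest = crc32c(pdu_payload, payload_len);

        /* Simulate corruption in flight, then re-verify on the receive side. */
        uint8_t corrupted[sizeof(pdu_payload)];
        memcpy(corrupted, pdu_payload, sizeof(pdu_payload));
        corrupted[0] ^= 0xFF;

        uint32_t actual = crc32c(corrupted, payload_len);
        if (actual != expected_digest) {
            printf("data digest mismatch: expected 0x%08x, got 0x%08x\n",
                   expected_digest, actual);
        }
        return 0;
    }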
00:17:15.209 [2024-07-13 08:03:20.894550] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.894917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.894945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.899822] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.900178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.900212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.904952] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.905248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.905274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.910067] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.209 [2024-07-13 08:03:20.910411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.209 [2024-07-13 08:03:20.910439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.209 [2024-07-13 08:03:20.915541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.915860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.915894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.920749] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.921100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.921124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.925966] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.926312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.926341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.931046] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.931387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.931414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.936363] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.936707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.936732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.941821] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.942162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.942202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.946951] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.947278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.947306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.952029] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.952375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.952406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.957476] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.957844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.957885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.962638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.962965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.962996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.967936] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.968245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.968290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.972809] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.973101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.973132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.977615] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.977937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.977967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.982739] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.983090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.983137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.988017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.988379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.988412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.993340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.993663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.993694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:20.998825] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:20.999137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:20.999184] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:21.004180] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:21.004530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:21.004578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:21.009375] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:21.009706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:21.009737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:21.014723] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:21.015040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:21.015072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.210 [2024-07-13 08:03:21.020108] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.210 [2024-07-13 08:03:21.020468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.210 [2024-07-13 08:03:21.020500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.471 [2024-07-13 08:03:21.025653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.471 [2024-07-13 08:03:21.025985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.471 [2024-07-13 08:03:21.026018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.471 [2024-07-13 08:03:21.031094] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.471 [2024-07-13 08:03:21.031388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.471 [2024-07-13 08:03:21.031419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.471 [2024-07-13 08:03:21.036224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.471 [2024-07-13 08:03:21.036536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.471 
[2024-07-13 08:03:21.036565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.471 [2024-07-13 08:03:21.041526] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.471 [2024-07-13 08:03:21.041847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.471 [2024-07-13 08:03:21.041885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.471 [2024-07-13 08:03:21.046741] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.471 [2024-07-13 08:03:21.047082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.471 [2024-07-13 08:03:21.047110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.471 [2024-07-13 08:03:21.051818] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.471 [2024-07-13 08:03:21.052168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.471 [2024-07-13 08:03:21.052200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.471 [2024-07-13 08:03:21.056622] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.471 [2024-07-13 08:03:21.056931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.056953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.061498] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.061804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.061841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.066270] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.066571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.066597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.071053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.071338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.071364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.075953] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.076250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.076276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.080876] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.081157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.081182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.085578] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.085905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.085932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.090427] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.090742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.090768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.095132] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.095407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.095433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.099940] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.100251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.100276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.104657] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.104967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 
nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.104993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.109631] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.109939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.109965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.114758] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.115084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.115110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.120491] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.120793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.120831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.126009] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.126360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.126388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.131399] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.131731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.131759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.137035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.137339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.137367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.142489] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.142838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.142866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.148110] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.148422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.148450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.154017] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.154370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.154398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.159461] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.159851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.159889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.165241] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.165514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.165555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.170974] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.171320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.171347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.176213] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.176547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.176589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.181346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 
[2024-07-13 08:03:21.181651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.181676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.186361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.186719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.186745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.191459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.191747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.191783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.196409] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.196707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.196731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.201443] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.201738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.201764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.206373] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.206707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.206732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.211587] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.211882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.472 [2024-07-13 08:03:21.211908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.472 [2024-07-13 08:03:21.216678] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with 
pdu=0x2000190fef90 00:17:15.472 [2024-07-13 08:03:21.217053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.473 [2024-07-13 08:03:21.217080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.473 [2024-07-13 08:03:21.222469] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.473 [2024-07-13 08:03:21.222800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.473 [2024-07-13 08:03:21.222834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.473 [2024-07-13 08:03:21.227699] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.473 [2024-07-13 08:03:21.228052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.473 [2024-07-13 08:03:21.228079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.473 [2024-07-13 08:03:21.232722] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.473 [2024-07-13 08:03:21.233020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.473 [2024-07-13 08:03:21.233045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.473 [2024-07-13 08:03:21.237613] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.473 [2024-07-13 08:03:21.237908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.473 [2024-07-13 08:03:21.237934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.473 [2024-07-13 08:03:21.242459] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.473 [2024-07-13 08:03:21.242804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.473 [2024-07-13 08:03:21.242839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.473 [2024-07-13 08:03:21.247406] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.473 [2024-07-13 08:03:21.247687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.473 [2024-07-13 08:03:21.247712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.473 [2024-07-13 08:03:21.252254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.473 [2024-07-13 08:03:21.252569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.473 [2024-07-13 08:03:21.252594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.473 [2024-07-13 08:03:21.256993] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.473 [2024-07-13 08:03:21.257282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.473 [2024-07-13 08:03:21.257306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.473 [2024-07-13 08:03:21.261857] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.473 [2024-07-13 08:03:21.262179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.473 [2024-07-13 08:03:21.262220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.473 [2024-07-13 08:03:21.266755] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.473 [2024-07-13 08:03:21.267048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.473 [2024-07-13 08:03:21.267072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.473 [2024-07-13 08:03:21.271653] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.473 [2024-07-13 08:03:21.271949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.473 [2024-07-13 08:03:21.271974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.473 [2024-07-13 08:03:21.276518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.473 [2024-07-13 08:03:21.276790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.473 [2024-07-13 08:03:21.276824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.473 [2024-07-13 08:03:21.281457] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.473 [2024-07-13 08:03:21.281798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.473 [2024-07-13 08:03:21.281831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.286898] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.287219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.287245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.291987] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.292296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.292321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.296782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.297059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.297084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.301636] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.301945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.301971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.306676] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.306977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.307002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.311609] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.311909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.311935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.316500] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.316770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.316789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
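(For readers decoding the completion notices: "(00/22)" is the status code type / status code pair, printed here as COMMAND TRANSIENT TRANSPORT ERROR, and p, m, dnr are the phase, more, and do-not-retry bits. Below is a small illustrative decoder for a 16-bit completion status word, assuming the standard NVMe layout with P in bit 0, SC in bits 8:1, SCT in bits 11:9, CRD in bits 13:12, M in bit 14, and DNR in bit 15; cqe_status is a made-up example value, not taken from this log.)

    /* status_decode_sketch.c - illustrative NVMe completion status decoder (not SPDK code) */
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical raw status word: SCT=0h, SC=22h, P/M/DNR all zero,
         * mirroring the "(00/22) ... p:0 m:0 dnr:0" notices in this log. */
        uint16_t cqe_status = (uint16_t)(0x22u << 1);

        unsigned p   = cqe_status & 0x1u;          /* phase tag           */
        unsigned sc  = (cqe_status >> 1) & 0xFFu;  /* status code         */
        unsigned sct = (cqe_status >> 9) & 0x7u;   /* status code type    */
        unsigned crd = (cqe_status >> 12) & 0x3u;  /* command retry delay */
        unsigned m   = (cqe_status >> 14) & 0x1u;  /* more info available */
        unsigned dnr = (cqe_status >> 15) & 0x1u;  /* do not retry        */

        printf("(%02x/%02x) crd:%u p:%u m:%u dnr:%u\n", sct, sc, crd, p, m, dnr);
        return 0;
    }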
00:17:15.734 [2024-07-13 08:03:21.321224] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.321538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.321558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.326002] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.326342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.326369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.330879] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.331161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.331186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.335612] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.335910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.335936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.340340] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.340626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.340682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.345129] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.345420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.345446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.350059] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.350393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.350434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.354949] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.355236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.355261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.359769] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.360051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.360075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.364611] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.364909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.364936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.369372] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.369654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.369678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.374177] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.374464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.374534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.379117] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.379426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.379451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.734 [2024-07-13 08:03:21.384026] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.734 [2024-07-13 08:03:21.384315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.734 [2024-07-13 08:03:21.384341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.388826] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.389096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.389121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.393714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.393998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.394023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.398574] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.398857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.398892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.403447] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.403728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.403753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.408176] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.408455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.408509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.413071] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.413360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.413386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.417912] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.418229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.418256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.422811] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.423100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.423125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.427638] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.427935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.427961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.432446] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.432729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.432754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.437303] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.437597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.437623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.442013] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.442336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.442362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.446820] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.447134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.447190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.451646] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.451950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 
08:03:21.451975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.456510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.456776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.456808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.461346] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.461631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.461656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.465991] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.466325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.466351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.470861] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.471162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.471203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.475707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.475989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.476014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.480424] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.480706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.480732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.485329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.485619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.485645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.490294] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.490632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.490689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.495276] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.495568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.495593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.500084] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.500372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.500397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.504858] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.505126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.505166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.509614] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.509910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.509936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.514616] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.514924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.514948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.519533] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.735 [2024-07-13 08:03:21.519833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.735 [2024-07-13 08:03:21.519866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.735 [2024-07-13 08:03:21.524518] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.736 [2024-07-13 08:03:21.524824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.736 [2024-07-13 08:03:21.524858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.736 [2024-07-13 08:03:21.529782] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.736 [2024-07-13 08:03:21.530094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.736 [2024-07-13 08:03:21.530172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.736 [2024-07-13 08:03:21.534772] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.736 [2024-07-13 08:03:21.535064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.736 [2024-07-13 08:03:21.535089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.736 [2024-07-13 08:03:21.539694] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.736 [2024-07-13 08:03:21.539975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.736 [2024-07-13 08:03:21.540000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.736 [2024-07-13 08:03:21.544714] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.736 [2024-07-13 08:03:21.545027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.736 [2024-07-13 08:03:21.545052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.996 [2024-07-13 08:03:21.549984] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.996 [2024-07-13 08:03:21.550312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.996 [2024-07-13 08:03:21.550340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.996 [2024-07-13 08:03:21.555299] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.996 [2024-07-13 08:03:21.555594] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.996 [2024-07-13 08:03:21.555619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.996 [2024-07-13 08:03:21.560131] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.996 [2024-07-13 08:03:21.560443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.996 [2024-07-13 08:03:21.560469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.996 [2024-07-13 08:03:21.564961] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.996 [2024-07-13 08:03:21.565246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.996 [2024-07-13 08:03:21.565271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.996 [2024-07-13 08:03:21.569667] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.996 [2024-07-13 08:03:21.569967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.996 [2024-07-13 08:03:21.569993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.996 [2024-07-13 08:03:21.574664] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.574970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.574995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.579541] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.579826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.579860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.584247] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.584534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.584559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.589039] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.589334] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.589359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.593846] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.594119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.594184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.598707] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.598986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.599011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.603562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.603860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.603888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.608329] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.608615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.608641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.613242] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.613561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.613602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.618113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.618457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.618498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.623200] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 
00:17:15.997 [2024-07-13 08:03:21.623483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.623524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.628254] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.628549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.628574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.633328] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.633611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.633636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.638262] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.638620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.638677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.643331] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.643642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.643667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.648376] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.648700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.648757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.653819] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.654115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.654180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.659095] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.659424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.659452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.664288] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.664619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.664644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.669400] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.669724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.669749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.674562] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.674867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.674914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.679701] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.679979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.680004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.684890] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.685192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.685220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.689930] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.690256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.690284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.695049] 
tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.695384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.695413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.700160] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.700475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.700503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.705344] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.705666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.705690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.710492] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.710817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.710850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.715686] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.997 [2024-07-13 08:03:21.715983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.997 [2024-07-13 08:03:21.716008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.997 [2024-07-13 08:03:21.720705] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.721006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.721031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.725847] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.726125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.726192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:17:15.998 [2024-07-13 08:03:21.731003] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.731330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.731358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.736079] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.736424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.736452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.741281] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.741616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.741641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.746480] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.746772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.746804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.751556] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.751822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.751854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.756690] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.756987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.757013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.761808] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.762098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.762122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.766923] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.767232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.767260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.771985] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.772309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.772337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.776997] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.777310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.777338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.781980] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.782298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.782328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.787049] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.787387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.787417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.792070] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.792407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.792436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.797162] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.797523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.797598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.802322] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.802637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.802677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:15.998 [2024-07-13 08:03:21.807692] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:15.998 [2024-07-13 08:03:21.807970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:15.998 [2024-07-13 08:03:21.807995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.813099] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.813429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.813458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.818532] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.818845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.818879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.823687] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.823977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.824003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.828696] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.828999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.829024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.833774] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.834066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.834090] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.838911] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.839218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.839247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.844036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.844372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.844400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.849081] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.849421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.849449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.854245] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.854585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.854610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.859348] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.859671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.859696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.864448] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.864769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.864801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.869555] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.869853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 
[2024-07-13 08:03:21.869887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.874844] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.875138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.875180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.879932] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.880263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.880291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.885036] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.885368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.885397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.890072] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.258 [2024-07-13 08:03:21.890422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.258 [2024-07-13 08:03:21.890450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.258 [2024-07-13 08:03:21.895357] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.895680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.895705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.900950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.901256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.901284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.905957] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.906286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.906315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.911087] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.911414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.911443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.916113] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.916441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.916470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.921185] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.921496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.921554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.926361] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.926685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.926725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.931418] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.931729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.931754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.936510] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.936808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.936840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.941582] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.941866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.941901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.946715] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.947031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.947056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.951830] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.952111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.952135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.956920] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.957228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.957256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.962074] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.962427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.962456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.967197] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.967500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.967556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.972239] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.972565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.972606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.977440] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.977751] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.977783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.982675] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.982980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.983020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.987850] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.988137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.988196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.993053] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.993392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.993420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:21.998123] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:21.998462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:21.998491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:22.003318] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:22.003674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:22.003699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:22.008421] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:22.008733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:22.008758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:22.013668] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:22.013966] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:22.013993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:22.018802] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:22.019111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:22.019136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:22.023950] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:22.024264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:22.024292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:22.029035] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:22.029375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:22.029404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:22.034232] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:22.034575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:22.034601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:17:16.259 [2024-07-13 08:03:22.039304] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.259 [2024-07-13 08:03:22.039645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.259 [2024-07-13 08:03:22.039671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:16.260 [2024-07-13 08:03:22.044619] tcp.c:2034:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x7cdfd0) with pdu=0x2000190fef90 00:17:16.260 [2024-07-13 08:03:22.044863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:17:16.260 [2024-07-13 08:03:22.044900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:17:16.260 00:17:16.260 Latency(us) 00:17:16.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.260 Job: nvme0n1 (Core Mask 0x2, 
workload: randwrite, depth: 16, IO size: 131072) 00:17:16.260 nvme0n1 : 2.00 6104.24 763.03 0.00 0.00 2615.36 2085.24 10068.71 00:17:16.260 =================================================================================================================== 00:17:16.260 Total : 6104.24 763.03 0.00 0.00 2615.36 2085.24 10068.71 00:17:16.260 0 00:17:16.260 08:03:22 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:17:16.260 08:03:22 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:17:16.260 08:03:22 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:17:16.260 08:03:22 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:17:16.260 | .driver_specific 00:17:16.260 | .nvme_error 00:17:16.260 | .status_code 00:17:16.260 | .command_transient_transport_error' 00:17:16.835 08:03:22 -- host/digest.sh@71 -- # (( 394 > 0 )) 00:17:16.835 08:03:22 -- host/digest.sh@73 -- # killprocess 79433 00:17:16.835 08:03:22 -- common/autotest_common.sh@926 -- # '[' -z 79433 ']' 00:17:16.835 08:03:22 -- common/autotest_common.sh@930 -- # kill -0 79433 00:17:16.835 08:03:22 -- common/autotest_common.sh@931 -- # uname 00:17:16.835 08:03:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:16.835 08:03:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79433 00:17:16.835 08:03:22 -- common/autotest_common.sh@932 -- # process_name=reactor_1 00:17:16.835 08:03:22 -- common/autotest_common.sh@936 -- # '[' reactor_1 = sudo ']' 00:17:16.835 killing process with pid 79433 00:17:16.835 08:03:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79433' 00:17:16.835 Received shutdown signal, test time was about 2.000000 seconds 00:17:16.835 00:17:16.835 Latency(us) 00:17:16.835 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.835 =================================================================================================================== 00:17:16.835 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.835 08:03:22 -- common/autotest_common.sh@945 -- # kill 79433 00:17:16.835 08:03:22 -- common/autotest_common.sh@950 -- # wait 79433 00:17:16.835 08:03:22 -- host/digest.sh@115 -- # killprocess 79315 00:17:16.835 08:03:22 -- common/autotest_common.sh@926 -- # '[' -z 79315 ']' 00:17:16.835 08:03:22 -- common/autotest_common.sh@930 -- # kill -0 79315 00:17:16.835 08:03:22 -- common/autotest_common.sh@931 -- # uname 00:17:16.835 08:03:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:17:16.835 08:03:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79315 00:17:16.835 08:03:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:17:16.835 08:03:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:17:16.835 killing process with pid 79315 00:17:16.835 08:03:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79315' 00:17:16.835 08:03:22 -- common/autotest_common.sh@945 -- # kill 79315 00:17:16.835 08:03:22 -- common/autotest_common.sh@950 -- # wait 79315 00:17:17.120 00:17:17.120 real 0m16.602s 00:17:17.120 user 0m32.679s 00:17:17.120 sys 0m4.520s 00:17:17.120 08:03:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.120 08:03:22 -- common/autotest_common.sh@10 -- # set +x 00:17:17.120 ************************************ 00:17:17.120 END TEST nvmf_digest_error 00:17:17.120 ************************************ 00:17:17.120 08:03:22 -- host/digest.sh@138 -- # trap - SIGINT 
SIGTERM EXIT 00:17:17.120 08:03:22 -- host/digest.sh@139 -- # nvmftestfini 00:17:17.120 08:03:22 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:17.120 08:03:22 -- nvmf/common.sh@116 -- # sync 00:17:17.379 08:03:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:17:17.379 08:03:23 -- nvmf/common.sh@119 -- # set +e 00:17:17.379 08:03:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:17.379 08:03:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:17:17.379 rmmod nvme_tcp 00:17:17.379 rmmod nvme_fabrics 00:17:17.379 rmmod nvme_keyring 00:17:17.379 08:03:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:17.379 08:03:23 -- nvmf/common.sh@123 -- # set -e 00:17:17.379 08:03:23 -- nvmf/common.sh@124 -- # return 0 00:17:17.379 08:03:23 -- nvmf/common.sh@477 -- # '[' -n 79315 ']' 00:17:17.379 08:03:23 -- nvmf/common.sh@478 -- # killprocess 79315 00:17:17.379 08:03:23 -- common/autotest_common.sh@926 -- # '[' -z 79315 ']' 00:17:17.379 08:03:23 -- common/autotest_common.sh@930 -- # kill -0 79315 00:17:17.379 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (79315) - No such process 00:17:17.379 Process with pid 79315 is not found 00:17:17.379 08:03:23 -- common/autotest_common.sh@953 -- # echo 'Process with pid 79315 is not found' 00:17:17.379 08:03:23 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:17.379 08:03:23 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:17:17.379 08:03:23 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:17:17.379 08:03:23 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:17.379 08:03:23 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:17:17.379 08:03:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.379 08:03:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.379 08:03:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.638 08:03:23 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:17:17.638 00:17:17.638 real 0m32.370s 00:17:17.638 user 1m1.148s 00:17:17.638 sys 0m9.277s 00:17:17.638 ************************************ 00:17:17.638 END TEST nvmf_digest 00:17:17.638 ************************************ 00:17:17.638 08:03:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:17.638 08:03:23 -- common/autotest_common.sh@10 -- # set +x 00:17:17.638 08:03:23 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:17:17.638 08:03:23 -- nvmf/nvmf.sh@115 -- # [[ 1 -eq 1 ]] 00:17:17.638 08:03:23 -- nvmf/nvmf.sh@116 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:17.638 08:03:23 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:17:17.638 08:03:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:17:17.638 08:03:23 -- common/autotest_common.sh@10 -- # set +x 00:17:17.638 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:17:17.638 ************************************ 00:17:17.638 START TEST nvmf_multipath 00:17:17.638 ************************************ 00:17:17.638 08:03:23 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:17:17.638 * Looking for test storage... 
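A minimal sketch of the pass/fail check the digest-error test above just ran, reconstructed only from the commands traced in this log (rpc.py path, bperf socket, jq filter and helper name are all as shown; nothing new is added beyond the shell wrapper):

    get_transient_errcount() {
        local bdev=$1
        # Read the controller error counters back over the bperf RPC socket
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
            bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }

    # 394 transient transport errors were counted for nvme0n1 in this run;
    # the test passes as long as the counter is non-zero.
    (( $(get_transient_errcount nvme0n1) > 0 ))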
00:17:17.638 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:17:17.638 08:03:23 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:17.638 08:03:23 -- nvmf/common.sh@7 -- # uname -s 00:17:17.638 08:03:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.638 08:03:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.638 08:03:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.638 08:03:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.638 08:03:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.638 08:03:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.638 08:03:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.638 08:03:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.638 08:03:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.638 08:03:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.638 08:03:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:17:17.638 08:03:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:17:17.638 08:03:23 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.638 08:03:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.638 08:03:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:17:17.638 08:03:23 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:17.638 08:03:23 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.638 08:03:23 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.638 08:03:23 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.638 08:03:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.638 08:03:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.639 08:03:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.639 08:03:23 -- paths/export.sh@5 
-- # export PATH 00:17:17.639 08:03:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.639 08:03:23 -- nvmf/common.sh@46 -- # : 0 00:17:17.639 08:03:23 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:17.639 08:03:23 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:17.639 08:03:23 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:17.639 08:03:23 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.639 08:03:23 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.639 08:03:23 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:17.639 08:03:23 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:17.639 08:03:23 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:17.639 08:03:23 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:17.639 08:03:23 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:17.639 08:03:23 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:17.639 08:03:23 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:17:17.639 08:03:23 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:17.639 08:03:23 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:17:17.639 08:03:23 -- host/multipath.sh@30 -- # nvmftestinit 00:17:17.639 08:03:23 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:17:17.639 08:03:23 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:17.639 08:03:23 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:17.639 08:03:23 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:17.639 08:03:23 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:17.639 08:03:23 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:17.639 08:03:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:17.639 08:03:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:17.639 08:03:23 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:17:17.639 08:03:23 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:17:17.639 08:03:23 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:17:17.639 08:03:23 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:17:17.639 08:03:23 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:17:17.639 08:03:23 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:17:17.639 08:03:23 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.639 08:03:23 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.639 08:03:23 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:17:17.639 08:03:23 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:17:17.639 08:03:23 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:17:17.639 08:03:23 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:17:17.639 08:03:23 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:17:17.639 08:03:23 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.639 08:03:23 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:17:17.639 08:03:23 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:17:17.639 08:03:23 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:17:17.639 08:03:23 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:17:17.639 08:03:23 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:17:17.639 08:03:23 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:17:17.639 Cannot find device "nvmf_tgt_br" 00:17:17.639 08:03:23 -- nvmf/common.sh@154 -- # true 00:17:17.639 08:03:23 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:17:17.898 Cannot find device "nvmf_tgt_br2" 00:17:17.898 08:03:23 -- nvmf/common.sh@155 -- # true 00:17:17.898 08:03:23 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:17:17.898 08:03:23 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:17:17.898 Cannot find device "nvmf_tgt_br" 00:17:17.898 08:03:23 -- nvmf/common.sh@157 -- # true 00:17:17.898 08:03:23 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:17:17.898 Cannot find device "nvmf_tgt_br2" 00:17:17.898 08:03:23 -- nvmf/common.sh@158 -- # true 00:17:17.898 08:03:23 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:17:17.898 08:03:23 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:17:17.898 08:03:23 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:17:17.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.898 08:03:23 -- nvmf/common.sh@161 -- # true 00:17:17.898 08:03:23 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:17:17.898 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:17:17.898 08:03:23 -- nvmf/common.sh@162 -- # true 00:17:17.898 08:03:23 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:17:17.898 08:03:23 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:17:17.898 08:03:23 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:17:17.898 08:03:23 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:17:17.898 08:03:23 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:17:17.898 08:03:23 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:17:17.898 08:03:23 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:17:17.898 08:03:23 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:17:17.898 08:03:23 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:17:17.898 08:03:23 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:17:17.898 08:03:23 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:17:17.898 08:03:23 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:17:17.898 08:03:23 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:17:17.898 08:03:23 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:17:17.898 08:03:23 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:17:17.898 08:03:23 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:17:17.898 08:03:23 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:17:17.898 08:03:23 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:17:17.898 08:03:23 -- 
nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:17:17.898 08:03:23 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:17:17.898 08:03:23 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:17:18.156 08:03:23 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:17:18.156 08:03:23 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:17:18.156 08:03:23 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:17:18.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:18.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.101 ms 00:17:18.156 00:17:18.156 --- 10.0.0.2 ping statistics --- 00:17:18.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.156 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:17:18.156 08:03:23 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:17:18.156 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:17:18.156 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.107 ms 00:17:18.156 00:17:18.156 --- 10.0.0.3 ping statistics --- 00:17:18.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.156 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:17:18.156 08:03:23 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:17:18.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:18.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.039 ms 00:17:18.156 00:17:18.156 --- 10.0.0.1 ping statistics --- 00:17:18.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:18.156 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:17:18.157 08:03:23 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:18.157 08:03:23 -- nvmf/common.sh@421 -- # return 0 00:17:18.157 08:03:23 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:18.157 08:03:23 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:18.157 08:03:23 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:17:18.157 08:03:23 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:17:18.157 08:03:23 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:18.157 08:03:23 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:17:18.157 08:03:23 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:17:18.157 08:03:23 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:17:18.157 08:03:23 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:18.157 08:03:23 -- common/autotest_common.sh@712 -- # xtrace_disable 00:17:18.157 08:03:23 -- common/autotest_common.sh@10 -- # set +x 00:17:18.157 08:03:23 -- nvmf/common.sh@469 -- # nvmfpid=79670 00:17:18.157 08:03:23 -- nvmf/common.sh@470 -- # waitforlisten 79670 00:17:18.157 08:03:23 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:18.157 08:03:23 -- common/autotest_common.sh@819 -- # '[' -z 79670 ']' 00:17:18.157 08:03:23 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.157 08:03:23 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:18.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.157 08:03:23 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
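The ping checks above exercise the veth/namespace fixture that nvmf_veth_init builds a few lines earlier. Condensed into one sequence as a sketch (interface names, addresses and iptables rules exactly as traced; link-up steps and error handling omitted):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target address
    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT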
00:17:18.157 08:03:23 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:18.157 08:03:23 -- common/autotest_common.sh@10 -- # set +x 00:17:18.157 [2024-07-13 08:03:23.822976] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:17:18.157 [2024-07-13 08:03:23.823081] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:18.157 [2024-07-13 08:03:23.965506] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:18.415 [2024-07-13 08:03:24.006724] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:18.415 [2024-07-13 08:03:24.006908] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:18.415 [2024-07-13 08:03:24.006926] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:18.415 [2024-07-13 08:03:24.006937] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:18.415 [2024-07-13 08:03:24.007112] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:18.415 [2024-07-13 08:03:24.007126] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:18.981 08:03:24 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:18.981 08:03:24 -- common/autotest_common.sh@852 -- # return 0 00:17:18.981 08:03:24 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:18.981 08:03:24 -- common/autotest_common.sh@718 -- # xtrace_disable 00:17:18.981 08:03:24 -- common/autotest_common.sh@10 -- # set +x 00:17:19.238 08:03:24 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.238 08:03:24 -- host/multipath.sh@33 -- # nvmfapp_pid=79670 00:17:19.238 08:03:24 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:19.496 [2024-07-13 08:03:25.073159] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.496 08:03:25 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:17:19.496 Malloc0 00:17:19.754 08:03:25 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:17:19.754 08:03:25 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:20.011 08:03:25 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:20.269 [2024-07-13 08:03:25.956889] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:20.269 08:03:25 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:20.527 [2024-07-13 08:03:26.149013] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:20.527 08:03:26 -- host/multipath.sh@44 -- # bdevperf_pid=79708 00:17:20.527 08:03:26 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 
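Before the path probing starts, the RPCs traced above have already built the target side. As a sketch (rpc.py path, object names and arguments exactly as traced), the configuration that bdevperf is about to connect to is:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    "$rpc" nvmf_create_transport -t tcp -o -u 8192
    "$rpc" bdev_malloc_create 64 512 -b Malloc0
    # -r enables ANA reporting, which is what the ANA-state flips below rely on
    "$rpc" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -r -m 2
    "$rpc" nvmf_subsystem_add_ns "$NQN" Malloc0
    "$rpc" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    "$rpc" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4421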
00:17:20.527 08:03:26 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:20.527 08:03:26 -- host/multipath.sh@47 -- # waitforlisten 79708 /var/tmp/bdevperf.sock 00:17:20.527 08:03:26 -- common/autotest_common.sh@819 -- # '[' -z 79708 ']' 00:17:20.527 08:03:26 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:20.527 08:03:26 -- common/autotest_common.sh@824 -- # local max_retries=100 00:17:20.527 08:03:26 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:20.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:20.527 08:03:26 -- common/autotest_common.sh@828 -- # xtrace_disable 00:17:20.527 08:03:26 -- common/autotest_common.sh@10 -- # set +x 00:17:21.460 08:03:27 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:17:21.460 08:03:27 -- common/autotest_common.sh@852 -- # return 0 00:17:21.460 08:03:27 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:17:21.718 08:03:27 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:17:21.976 Nvme0n1 00:17:21.976 08:03:27 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:17:22.235 Nvme0n1 00:17:22.235 08:03:28 -- host/multipath.sh@78 -- # sleep 1 00:17:22.235 08:03:28 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:17:23.608 08:03:29 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:17:23.608 08:03:29 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:23.608 08:03:29 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:23.867 08:03:29 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:17:23.867 08:03:29 -- host/multipath.sh@65 -- # dtrace_pid=79735 00:17:23.867 08:03:29 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79670 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:23.867 08:03:29 -- host/multipath.sh@66 -- # sleep 6 00:17:30.431 08:03:35 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:30.431 08:03:35 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:30.431 08:03:35 -- host/multipath.sh@67 -- # active_port=4421 00:17:30.431 08:03:35 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:30.431 Attaching 4 probes... 
00:17:30.431 @path[10.0.0.2, 4421]: 19701 00:17:30.431 @path[10.0.0.2, 4421]: 17015 00:17:30.431 @path[10.0.0.2, 4421]: 17069 00:17:30.431 @path[10.0.0.2, 4421]: 17171 00:17:30.431 @path[10.0.0.2, 4421]: 17178 00:17:30.431 08:03:35 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:30.431 08:03:35 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:30.431 08:03:35 -- host/multipath.sh@69 -- # sed -n 1p 00:17:30.431 08:03:35 -- host/multipath.sh@69 -- # port=4421 00:17:30.431 08:03:35 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:30.431 08:03:35 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:30.431 08:03:35 -- host/multipath.sh@72 -- # kill 79735 00:17:30.431 08:03:35 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:30.431 08:03:35 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:17:30.431 08:03:35 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:30.431 08:03:36 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:30.690 08:03:36 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:17:30.690 08:03:36 -- host/multipath.sh@65 -- # dtrace_pid=79811 00:17:30.690 08:03:36 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79670 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:30.690 08:03:36 -- host/multipath.sh@66 -- # sleep 6 00:17:37.255 08:03:42 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:37.255 08:03:42 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:17:37.255 08:03:42 -- host/multipath.sh@67 -- # active_port=4420 00:17:37.255 08:03:42 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:37.255 Attaching 4 probes... 
00:17:37.255 @path[10.0.0.2, 4420]: 17094 00:17:37.255 @path[10.0.0.2, 4420]: 17336 00:17:37.255 @path[10.0.0.2, 4420]: 17382 00:17:37.255 @path[10.0.0.2, 4420]: 17388 00:17:37.255 @path[10.0.0.2, 4420]: 17284 00:17:37.255 08:03:42 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:37.255 08:03:42 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:37.255 08:03:42 -- host/multipath.sh@69 -- # sed -n 1p 00:17:37.255 08:03:42 -- host/multipath.sh@69 -- # port=4420 00:17:37.255 08:03:42 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:17:37.255 08:03:42 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:17:37.255 08:03:42 -- host/multipath.sh@72 -- # kill 79811 00:17:37.255 08:03:42 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:37.255 08:03:42 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:17:37.255 08:03:42 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:37.255 08:03:42 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:37.511 08:03:43 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:17:37.511 08:03:43 -- host/multipath.sh@65 -- # dtrace_pid=79887 00:17:37.511 08:03:43 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79670 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:37.511 08:03:43 -- host/multipath.sh@66 -- # sleep 6 00:17:44.097 08:03:49 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:44.097 08:03:49 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:44.097 08:03:49 -- host/multipath.sh@67 -- # active_port=4421 00:17:44.097 08:03:49 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:44.097 Attaching 4 probes... 
00:17:44.097 @path[10.0.0.2, 4421]: 13132 00:17:44.097 @path[10.0.0.2, 4421]: 16957 00:17:44.097 @path[10.0.0.2, 4421]: 16966 00:17:44.097 @path[10.0.0.2, 4421]: 16898 00:17:44.097 @path[10.0.0.2, 4421]: 17009 00:17:44.097 08:03:49 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:44.097 08:03:49 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:44.097 08:03:49 -- host/multipath.sh@69 -- # sed -n 1p 00:17:44.097 08:03:49 -- host/multipath.sh@69 -- # port=4421 00:17:44.097 08:03:49 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:44.097 08:03:49 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:44.098 08:03:49 -- host/multipath.sh@72 -- # kill 79887 00:17:44.098 08:03:49 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:44.098 08:03:49 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:17:44.098 08:03:49 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:17:44.098 08:03:49 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:17:44.356 08:03:49 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:17:44.356 08:03:49 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79670 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:44.356 08:03:49 -- host/multipath.sh@65 -- # dtrace_pid=79961 00:17:44.356 08:03:49 -- host/multipath.sh@66 -- # sleep 6 00:17:50.914 08:03:55 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:50.914 08:03:55 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:17:50.914 08:03:56 -- host/multipath.sh@67 -- # active_port= 00:17:50.914 08:03:56 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:50.914 Attaching 4 probes... 
00:17:50.914 00:17:50.914 00:17:50.914 00:17:50.914 00:17:50.914 00:17:50.914 08:03:56 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:50.914 08:03:56 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:50.914 08:03:56 -- host/multipath.sh@69 -- # sed -n 1p 00:17:50.914 08:03:56 -- host/multipath.sh@69 -- # port= 00:17:50.914 08:03:56 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:17:50.914 08:03:56 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:17:50.914 08:03:56 -- host/multipath.sh@72 -- # kill 79961 00:17:50.914 08:03:56 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:50.914 08:03:56 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:17:50.915 08:03:56 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:17:50.915 08:03:56 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:17:51.171 08:03:56 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:17:51.171 08:03:56 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79670 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:51.171 08:03:56 -- host/multipath.sh@65 -- # dtrace_pid=80037 00:17:51.171 08:03:56 -- host/multipath.sh@66 -- # sleep 6 00:17:57.729 08:04:02 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:17:57.729 08:04:02 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:17:57.729 08:04:03 -- host/multipath.sh@67 -- # active_port=4421 00:17:57.729 08:04:03 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:57.729 Attaching 4 probes... 
00:17:57.729 @path[10.0.0.2, 4421]: 16359 00:17:57.729 @path[10.0.0.2, 4421]: 16727 00:17:57.729 @path[10.0.0.2, 4421]: 16529 00:17:57.729 @path[10.0.0.2, 4421]: 16627 00:17:57.729 @path[10.0.0.2, 4421]: 16617 00:17:57.729 08:04:03 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:17:57.729 08:04:03 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:17:57.729 08:04:03 -- host/multipath.sh@69 -- # sed -n 1p 00:17:57.729 08:04:03 -- host/multipath.sh@69 -- # port=4421 00:17:57.729 08:04:03 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:17:57.729 08:04:03 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:17:57.729 08:04:03 -- host/multipath.sh@72 -- # kill 80037 00:17:57.729 08:04:03 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:17:57.729 08:04:03 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:57.729 [2024-07-13 08:04:03.306132] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306214] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306235] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306243] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306252] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306261] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306270] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306278] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306287] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306295] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306304] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306313] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306321] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306338] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306347] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306373] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306390] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306415] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306466] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306474] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 [2024-07-13 08:04:03.306483] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x79e610 is same with the state(5) to be set 00:17:57.729 08:04:03 -- host/multipath.sh@101 -- # sleep 1 00:17:58.661 08:04:04 -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:17:58.661 08:04:04 -- host/multipath.sh@65 -- # dtrace_pid=80118 00:17:58.661 08:04:04 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79670 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:17:58.661 08:04:04 -- host/multipath.sh@66 -- # sleep 6 00:18:05.292 08:04:10 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:05.292 08:04:10 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:18:05.292 08:04:10 -- host/multipath.sh@67 -- # active_port=4420 00:18:05.292 08:04:10 -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:05.292 Attaching 4 probes... 00:18:05.292 @path[10.0.0.2, 4420]: 16326 00:18:05.292 @path[10.0.0.2, 4420]: 16625 00:18:05.292 @path[10.0.0.2, 4420]: 16607 00:18:05.292 @path[10.0.0.2, 4420]: 16593 00:18:05.292 @path[10.0.0.2, 4420]: 16686 00:18:05.292 08:04:10 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:05.292 08:04:10 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:05.292 08:04:10 -- host/multipath.sh@69 -- # sed -n 1p 00:18:05.292 08:04:10 -- host/multipath.sh@69 -- # port=4420 00:18:05.292 08:04:10 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:18:05.292 08:04:10 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:18:05.292 08:04:10 -- host/multipath.sh@72 -- # kill 80118 00:18:05.292 08:04:10 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:05.292 08:04:10 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:05.292 [2024-07-13 08:04:10.869313] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:05.292 08:04:10 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:18:05.550 08:04:11 -- host/multipath.sh@111 -- # sleep 6 00:18:12.109 08:04:17 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:18:12.109 08:04:17 -- host/multipath.sh@65 -- # dtrace_pid=80220 00:18:12.109 08:04:17 -- host/multipath.sh@66 -- # sleep 6 00:18:12.109 08:04:17 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 79670 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:18:17.374 08:04:23 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:18:17.374 08:04:23 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:18:17.632 08:04:23 -- host/multipath.sh@67 -- # active_port=4421 00:18:17.632 08:04:23 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:17.632 Attaching 4 probes... 
00:18:17.632 @path[10.0.0.2, 4421]: 16172 00:18:17.632 @path[10.0.0.2, 4421]: 16347 00:18:17.632 @path[10.0.0.2, 4421]: 16391 00:18:17.632 @path[10.0.0.2, 4421]: 16428 00:18:17.632 @path[10.0.0.2, 4421]: 16451 00:18:17.632 08:04:23 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:18:17.632 08:04:23 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:18:17.632 08:04:23 -- host/multipath.sh@69 -- # sed -n 1p 00:18:17.632 08:04:23 -- host/multipath.sh@69 -- # port=4421 00:18:17.632 08:04:23 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:18:17.632 08:04:23 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:18:17.632 08:04:23 -- host/multipath.sh@72 -- # kill 80220 00:18:17.632 08:04:23 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:18:17.903 08:04:23 -- host/multipath.sh@114 -- # killprocess 79708 00:18:17.903 08:04:23 -- common/autotest_common.sh@926 -- # '[' -z 79708 ']' 00:18:17.903 08:04:23 -- common/autotest_common.sh@930 -- # kill -0 79708 00:18:17.903 08:04:23 -- common/autotest_common.sh@931 -- # uname 00:18:17.903 08:04:23 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:17.903 08:04:23 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79708 00:18:17.903 08:04:23 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:18:17.903 08:04:23 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:18:17.903 08:04:23 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79708' 00:18:17.903 killing process with pid 79708 00:18:17.903 08:04:23 -- common/autotest_common.sh@945 -- # kill 79708 00:18:17.903 08:04:23 -- common/autotest_common.sh@950 -- # wait 79708 00:18:17.903 Connection closed with partial response: 00:18:17.903 00:18:17.903 00:18:17.903 08:04:23 -- host/multipath.sh@116 -- # wait 79708 00:18:17.903 08:04:23 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:17.903 [2024-07-13 08:03:26.216079] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:17.903 [2024-07-13 08:03:26.216274] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79708 ] 00:18:17.903 [2024-07-13 08:03:26.354990] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.903 [2024-07-13 08:03:26.386919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.903 Running I/O for 90 seconds... 
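The per-I/O trace that follows is the raw material for the confirm_io_on_port checks traced throughout the run above. A sketch of that check, reconstructed from the traced commands (RPC call, jq filter and text pipeline as shown; the pipeline stage order is inferred, since xtrace does not preserve it):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

    confirm_io_on_port() {
        local expected_state=$1 expected_port=$2

        # Which listener does the target currently report in the expected ANA state?
        local active_port
        active_port=$("$rpc" nvmf_subsystem_get_listeners "$NQN" \
            | jq -r ".[] | select (.ana_states[0].ana_state==\"$expected_state\") | .address.trsvcid")

        # Which port did the bpftrace probes actually count I/O on?
        # Trace lines look like: @path[10.0.0.2, 4421]: 16359
        local port
        port=$(awk '$1=="@path[10.0.0.2," {print $2}' "$trace" | cut -d ']' -f1 | sed -n 1p)

        [[ $active_port == "$expected_port" && $port == "$expected_port" ]]
    }

    confirm_io_on_port optimized 4421   # e.g. the first probe block in this run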
00:18:17.903 [2024-07-13 08:03:36.304107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:17104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.903 [2024-07-13 08:03:36.304203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:17112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.903 [2024-07-13 08:03:36.304294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.903 [2024-07-13 08:03:36.304334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.903 [2024-07-13 08:03:36.304370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:17136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.903 [2024-07-13 08:03:36.304406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:16440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.903 [2024-07-13 08:03:36.304442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:16448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.903 [2024-07-13 08:03:36.304477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.903 [2024-07-13 08:03:36.304527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.903 [2024-07-13 08:03:36.304563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.903 [2024-07-13 08:03:36.304598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.903 [2024-07-13 08:03:36.304714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.903 [2024-07-13 08:03:36.304753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.903 [2024-07-13 08:03:36.304787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:17144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.903 [2024-07-13 08:03:36.304821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.903 [2024-07-13 08:03:36.304906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.304943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:17160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.903 [2024-07-13 08:03:36.304972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.305023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.903 [2024-07-13 08:03:36.305038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.305060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.903 [2024-07-13 08:03:36.305075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:17.903 [2024-07-13 08:03:36.305429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.904 [2024-07-13 08:03:36.305456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.305482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:17192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.904 [2024-07-13 08:03:36.305499] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.305520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:17200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.305535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.305555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:17208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.904 [2024-07-13 08:03:36.305570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.305591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:17216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.904 [2024-07-13 08:03:36.305621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.305669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.305685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.305722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:17232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.305738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.305759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:16536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.305775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.305797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.305812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.305834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:16560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.305849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.305884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.305902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.305924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16576 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.305939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.305976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.305991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:16608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:16616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:17240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:17248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:17264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.904 [2024-07-13 08:03:36.306345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:62 nsid:1 lba:17288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.904 [2024-07-13 08:03:36.306381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.904 [2024-07-13 08:03:36.306418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:17304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.904 [2024-07-13 08:03:36.306492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:16656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:16664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:16720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306771] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:16760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:17328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.306956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:17336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.306971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.307012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:17344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.904 [2024-07-13 08:03:36.307028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.307049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:17352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.904 [2024-07-13 08:03:36.307065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.307086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.904 [2024-07-13 08:03:36.307101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.307122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:17368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.307137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.307160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:17376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.904 [2024-07-13 08:03:36.307175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:18:17.904 [2024-07-13 08:03:36.307196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.904 [2024-07-13 08:03:36.307230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:17392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.905 [2024-07-13 08:03:36.307268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.307305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:17408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.905 [2024-07-13 08:03:36.307342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.905 [2024-07-13 08:03:36.307393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.307446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:17432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.307482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.905 [2024-07-13 08:03:36.307518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.307554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.307591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.307648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.307686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:17480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.905 [2024-07-13 08:03:36.307747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.905 [2024-07-13 08:03:36.307796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.307846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.307888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:16832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.307925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:16840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.307962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.307984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:16856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308038] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:16880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:16912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:17512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:17520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:17528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.905 [2024-07-13 08:03:36.308334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:17536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.905 [2024-07-13 08:03:36.308370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:17544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17552 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:17560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:17568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:17576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.905 [2024-07-13 08:03:36.308551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:17584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.905 [2024-07-13 08:03:36.308588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.905 [2024-07-13 08:03:36.308668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:17608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.905 [2024-07-13 08:03:36.308706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:17616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.905 [2024-07-13 08:03:36.308754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:16928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.905 [2024-07-13 08:03:36.308953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:17.905 [2024-07-13 08:03:36.308975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:16976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.308989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:17024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:17624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.906 [2024-07-13 08:03:36.309170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.906 [2024-07-13 08:03:36.309207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309229] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:17648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.906 [2024-07-13 08:03:36.309251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:17656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:17664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.906 [2024-07-13 08:03:36.309327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:17672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:17680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.906 [2024-07-13 08:03:36.309402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:17696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005c p:0 
m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:17032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:17040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:17048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.309965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:17096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:36.309983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:36.310006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.906 [2024-07-13 08:03:36.310021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.851391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.906 [2024-07-13 08:03:42.851465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.851501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.906 [2024-07-13 08:03:42.851535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.851571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:42.851585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.851607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:42.851621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.851642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:42.851656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.851699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:42.851714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.851735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:42.851749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.851770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:42.851784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.851820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:42.851849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.851873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:42.851889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.851910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:42.851925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.851946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.906 [2024-07-13 08:03:42.851960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.851982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:42.851996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.852018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.906 [2024-07-13 08:03:42.852033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:17.906 [2024-07-13 08:03:42.852068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.906 [2024-07-13 08:03:42.852098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.907 [2024-07-13 08:03:42.852135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.907 [2024-07-13 08:03:42.852185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.907 [2024-07-13 08:03:42.852249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.907 [2024-07-13 08:03:42.852286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:17.907 [2024-07-13 08:03:42.852322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.907 [2024-07-13 08:03:42.852358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.907 [2024-07-13 08:03:42.852416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.907 [2024-07-13 08:03:42.852469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.907 [2024-07-13 08:03:42.852522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.907 [2024-07-13 08:03:42.852571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.907 [2024-07-13 08:03:42.852624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.907 [2024-07-13 08:03:42.852660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.907 [2024-07-13 08:03:42.852696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.907 [2024-07-13 08:03:42.852740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 
lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.907 [2024-07-13 08:03:42.852788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.907 [2024-07-13 08:03:42.852827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.907 [2024-07-13 08:03:42.852894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.907 [2024-07-13 08:03:42.852975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.852999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.907 [2024-07-13 08:03:42.853015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.853036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.907 [2024-07-13 08:03:42.853051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.853074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.907 [2024-07-13 08:03:42.853088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.853110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.907 [2024-07-13 08:03:42.853125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.853146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.907 [2024-07-13 08:03:42.853162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:17.907 [2024-07-13 08:03:42.853184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.907 [2024-07-13 08:03:42.853200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.853236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.853272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.853315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.853353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.853390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.853426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.853461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.853498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.853533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.853570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 
00:18:17.908 [2024-07-13 08:03:42.853591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.853605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.853642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.853678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.853714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.853750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.853805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.853844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.853882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.853918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.853958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.853980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.853995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.854031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.854067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.854104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.854150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.854190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:97448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.854247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.854302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:97464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.854341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.854378] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.854415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.854453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.854490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:97504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.854526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.854579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.854614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.908 [2024-07-13 08:03:42.854649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.854684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.854718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:17.908 [2024-07-13 08:03:42.854760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.908 [2024-07-13 08:03:42.854827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:17.908 [2024-07-13 08:03:42.854851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.854867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.854888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.909 [2024-07-13 08:03:42.854903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.854925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:97584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.909 [2024-07-13 08:03:42.854943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.854966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.854981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.909 [2024-07-13 08:03:42.855054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:97624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.909 [2024-07-13 08:03:42.855496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:97656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.909 [2024-07-13 08:03:42.855680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.909 [2024-07-13 08:03:42.855717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.909 [2024-07-13 08:03:42.855753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.909 [2024-07-13 08:03:42.855797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.909 [2024-07-13 08:03:42.855843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.855945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.909 [2024-07-13 08:03:42.855959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:18:17.909 [2024-07-13 08:03:42.855981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.855995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.856017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.856032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.856053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.856068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.856089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.856104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.856126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.856141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.856163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.856178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.856200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.856232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.857766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.857810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.857868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.857887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.857910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.909 [2024-07-13 08:03:42.857925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.857947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.909 [2024-07-13 08:03:42.857962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.857984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.857999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.858020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.858035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:17.909 [2024-07-13 08:03:42.858057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.909 [2024-07-13 08:03:42.858072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.858108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:97784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.858156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.858196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:97800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.858232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.858269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.858317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:97120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.858357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:97128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.858582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.858627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.858665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.858732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.858783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.858883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.858924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.858960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.858982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.858997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.859033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:97144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.859080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.859119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.859155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:97168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.859206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.859241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.859279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:97192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.859316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:97200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.859368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:122 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.859406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:97216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.859443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:97224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.859479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.859521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:97240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.859564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.859602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:97256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.859639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.859675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:97272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.859712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.859748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859769] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.859793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:97296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.859844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.910 [2024-07-13 08:03:42.859881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.859920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.859958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:17.910 [2024-07-13 08:03:42.859979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.910 [2024-07-13 08:03:42.859995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.860032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.860089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.860140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.860176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 
00:18:17.911 [2024-07-13 08:03:42.860197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.860211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:97312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.911 [2024-07-13 08:03:42.860246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.860297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:97328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.911 [2024-07-13 08:03:42.860349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:97336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.911 [2024-07-13 08:03:42.860421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.860459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:97352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.911 [2024-07-13 08:03:42.860496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:97360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.911 [2024-07-13 08:03:42.860532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.860569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.911 [2024-07-13 08:03:42.860626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.860664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.860701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:97400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.911 [2024-07-13 08:03:42.860768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.860817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:97416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.911 [2024-07-13 08:03:42.860858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:97424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.911 [2024-07-13 08:03:42.860895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.860931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.860967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.860989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.861004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:17.911 [2024-07-13 08:03:42.861025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.911 [2024-07-13 08:03:42.861040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:18:17.911 [2024-07-13 08:03:42.861-08:03:42.879] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: the same command/completion pair repeats for every outstanding I/O on qid:1 (READ and WRITE, sqid:1 nsid:1 len:8, lba 96480-97816, cid 0-126): each command print is followed by a completion with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 p:0 m:0 dnr:0, sqhd cycling through 0x0000-0x007f while the namespace ANA state is inaccessible 
00:18:17.916 [2024-07-13 08:03:49.928] nvme_qpair.c: a later burst of qid:1 WRITE/READ commands (lba 121040 onward, len:8) hits the same ASYMMETRIC ACCESS INACCESSIBLE (03/02) completion status; the per-command trace continues 
00:18:17.916 [2024-07-13
08:03:49.928569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.916 [2024-07-13 08:03:49.928583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:18:17.916 [2024-07-13 08:03:49.928604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:121120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.916 [2024-07-13 08:03:49.928618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:18:17.916 [2024-07-13 08:03:49.928638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.916 [2024-07-13 08:03:49.928652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:18:17.916 [2024-07-13 08:03:49.928673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:121136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.916 [2024-07-13 08:03:49.928706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:18:17.916 [2024-07-13 08:03:49.928729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:121144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.916 [2024-07-13 08:03:49.928743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:18:17.916 [2024-07-13 08:03:49.928966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.916 [2024-07-13 08:03:49.928990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:18:17.916 [2024-07-13 08:03:49.929015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.916 [2024-07-13 08:03:49.929030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:18:17.916 [2024-07-13 08:03:49.929053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.916 [2024-07-13 08:03:49.929067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:18:17.916 [2024-07-13 08:03:49.929118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:121176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:121184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.917 [2024-07-13 08:03:49.929573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.917 [2024-07-13 08:03:49.929610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.917 [2024-07-13 08:03:49.929646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:121216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.917 [2024-07-13 08:03:49.929719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.917 [2024-07-13 08:03:49.929759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:121240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:121248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.917 [2024-07-13 08:03:49.929891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:121264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.929962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:121272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.929978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:121280 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:121288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.917 [2024-07-13 08:03:49.930089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.917 [2024-07-13 08:03:49.930125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:121320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:33 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:121328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.917 [2024-07-13 08:03:49.930594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.917 [2024-07-13 08:03:49.930631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:121360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 08:03:49.930727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:121368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.917 [2024-07-13 08:03:49.930742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:18:17.917 [2024-07-13 
08:03:49.930764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.917 [2024-07-13 08:03:49.930790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.930815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.930830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.930852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.930867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.930896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.918 [2024-07-13 08:03:49.930911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.930934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.918 [2024-07-13 08:03:49.930948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.930974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.918 [2024-07-13 08:03:49.930990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.918 [2024-07-13 08:03:49.931063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:5 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.918 [2024-07-13 08:03:49.931214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:120864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931520] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.918 [2024-07-13 08:03:49.931593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.918 [2024-07-13 08:03:49.931630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.918 [2024-07-13 08:03:49.931706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:121512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:121520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:121528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.918 [2024-07-13 08:03:49.931876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121544 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:18:17.918 [2024-07-13 08:03:49.931913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.931972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:121560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.931992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.932014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.918 [2024-07-13 08:03:49.932029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.932051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.918 [2024-07-13 08:03:49.932065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.932087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.932102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.932124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.932138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.932160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.932174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.932196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.932211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.932233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.932247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.932269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:126 nsid:1 lba:120976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.932287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.933164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.933195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:18:17.918 [2024-07-13 08:03:49.933231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:121024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.918 [2024-07-13 08:03:49.933249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.933279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.919 [2024-07-13 08:03:49.933295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.933325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.919 [2024-07-13 08:03:49.933340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.933371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:121600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:03:49.933386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.933417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.919 [2024-07-13 08:03:49.933432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.933462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:121616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:03:49.933477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.933507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.919 [2024-07-13 08:03:49.933522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.933552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.919 [2024-07-13 08:03:49.933566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 
08:03:49.933596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:121640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:03:49.933612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.933641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:121648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:03:49.933656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.933685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:121656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:03:49.933700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.933740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.919 [2024-07-13 08:03:49.933756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.933800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.919 [2024-07-13 08:03:49.933818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.933865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.919 [2024-07-13 08:03:49.933885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.933916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.919 [2024-07-13 08:03:49.933932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.933962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:121696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:03:49.933979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.934010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.919 [2024-07-13 08:03:49.934026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.934056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:03:49.934071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:95 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.934101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:121720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:03:49.934116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.934156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.919 [2024-07-13 08:03:49.934174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.934205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.919 [2024-07-13 08:03:49.934220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:03:49.934250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:03:49.934265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.306556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.306613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.306662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.306680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.306695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.306708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.306723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.306736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.306751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:29888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.306765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.306780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:30528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.306793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.306857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:30536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.306873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.306888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:30544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.306917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.306947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:30552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.306960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.306975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:30576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.306989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.307004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:30592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.307018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.307033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:30608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.919 [2024-07-13 08:04:03.307046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.307061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:30616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.919 [2024-07-13 08:04:03.307074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.307089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:29904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.307111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.307127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.307140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.307155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.307168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
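(The paired NOTICE lines above are SPDK's nvme_qpair.c command/completion printers dumping each outstanding READ/WRITE together with its completion status, shown as a "(SCT/SC)" pair such as ASYMMETRIC ACCESS INACCESSIBLE (03/02) or ABORTED - SQ DELETION (00/08). Purely as an illustration of how to read those pairs when scanning this log — not part of the test itself — a minimal sketch follows; the regular expression and the decode_status helper are hypothetical names introduced here, and the mapping covers only the two status pairs that actually appear in this output.)

import re

# Decode the "(SCT/SC)" pair printed in the completion NOTICE lines,
# e.g. "(03/02)" or "(00/08)". Only the two combinations seen in this
# log are mapped; anything else is reported as unknown.
STATUS_NAMES = {
    (0x0, 0x08): "ABORTED - SQ DELETION",           # generic status, SC 0x08
    (0x3, 0x02): "ASYMMETRIC ACCESS INACCESSIBLE",  # path-related status, SC 0x02
}

COMPLETION_RE = re.compile(
    r"\((?P<sct>[0-9a-f]{2})/(?P<sc>[0-9a-f]{2})\)\s+qid:(?P<qid>\d+)\s+cid:(?P<cid>\d+)"
)

def decode_status(line):
    """Return (qid, cid, status name) for a completion NOTICE line, or None."""
    m = COMPLETION_RE.search(line)
    if not m:
        return None
    key = (int(m.group("sct"), 16), int(m.group("sc"), 16))
    name = STATUS_NAMES.get(key, "unknown sct/sc %02x/%02x" % key)
    return int(m.group("qid")), int(m.group("cid")), name

if __name__ == "__main__":
    sample = ("spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION "
              "(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0")
    print(decode_status(sample))  # -> (1, 0, 'ABORTED - SQ DELETION')

(A helper like this is only a reading aid for the flood of per-command notices; the test output itself continues unchanged below.)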
00:18:17.919 [2024-07-13 08:04:03.307183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:29968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.307199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.307215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:30000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.307229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.307244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:30008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.307258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.307289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:30016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.307303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.307319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:30072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.919 [2024-07-13 08:04:03.307333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.919 [2024-07-13 08:04:03.307348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.307361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:30632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.307391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:30640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.307420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:30648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.920 [2024-07-13 08:04:03.307449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:30656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.920 [2024-07-13 08:04:03.307478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307500] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:30664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.920 [2024-07-13 08:04:03.307514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:30672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.307558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:30680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.920 [2024-07-13 08:04:03.307586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:30688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.307613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:30696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.920 [2024-07-13 08:04:03.307641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:30704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.307669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:30712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.920 [2024-07-13 08:04:03.307698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:30720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.920 [2024-07-13 08:04:03.307728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:30728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.307756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:30736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.920 [2024-07-13 08:04:03.307784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307799] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:70 nsid:1 lba:30744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.307811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:30752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.920 [2024-07-13 08:04:03.307851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:30760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.920 [2024-07-13 08:04:03.307912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:30768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.307964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.307980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.307994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:30784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.920 [2024-07-13 08:04:03.308023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:30104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.308051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:30112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.308081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:30120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.308110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:30128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.308139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:30184 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.308168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:30240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.308211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.308239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:30256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.308268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:30792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.308296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:30800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.308330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:30808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.920 [2024-07-13 08:04:03.308359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:30816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.920 [2024-07-13 08:04:03.308404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:30824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.308432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:30832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.920 [2024-07-13 08:04:03.308461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:30840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:18:17.920 [2024-07-13 08:04:03.308489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.920 [2024-07-13 08:04:03.308505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:30848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.921 [2024-07-13 08:04:03.308518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:30856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.921 [2024-07-13 08:04:03.308547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:30864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.308576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:30872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.308605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:30880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.308633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.308662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:30896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.921 [2024-07-13 08:04:03.308691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:30904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.308726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:30912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.308756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.308785] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:30928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.921 [2024-07-13 08:04:03.308814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:30264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.308843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:30272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.308885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:30280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.308924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:30320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.308953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:30328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.308981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.308997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:30344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:30360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:30368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:30936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.921 [2024-07-13 08:04:03.309114] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:30944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.921 [2024-07-13 08:04:03.309145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:30952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:30960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:30968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.921 [2024-07-13 08:04:03.309440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:30976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.921 [2024-07-13 08:04:03.309471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:30984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.921 [2024-07-13 08:04:03.309500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:31000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.921 [2024-07-13 08:04:03.309558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:31008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:31016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.921 [2024-07-13 08:04:03.309621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:31024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:31032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:31040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.921 [2024-07-13 08:04:03.309717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:31048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:31056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:31072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:31080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.921 [2024-07-13 08:04:03.309880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:31088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.921 [2024-07-13 08:04:03.309909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:17.921 [2024-07-13 08:04:03.309953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:30384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.309967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.309986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:30392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.310001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.921 [2024-07-13 08:04:03.310017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:30424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.921 [2024-07-13 08:04:03.310030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:30448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:30456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:30472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:30480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:31096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.922 [2024-07-13 08:04:03.310200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:31104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.922 [2024-07-13 08:04:03.310228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:31112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310272] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:31120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:31128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:31136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:31144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:31168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:31176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:31184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.922 [2024-07-13 08:04:03.310529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:31192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.922 [2024-07-13 08:04:03.310558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:31200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:31216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:17.922 [2024-07-13 08:04:03.310648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:30496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:30512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:30520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:30560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:30568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:30584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:17.922 [2024-07-13 08:04:03.310864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310885] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x2100dd0 is same with the state(5) to be set 00:18:17.922 [2024-07-13 08:04:03.310903] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:17.922 [2024-07-13 08:04:03.310914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:17.922 [2024-07-13 08:04:03.310925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:30600 len:8 PRP1 0x0 PRP2 0x0 00:18:17.922 [2024-07-13 08:04:03.310938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.310984] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2100dd0 was disconnected and freed. reset controller. 00:18:17.922 [2024-07-13 08:04:03.311075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.922 [2024-07-13 08:04:03.311101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.311116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.922 [2024-07-13 08:04:03.311130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.311144] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.922 [2024-07-13 08:04:03.311158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.311172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:17.922 [2024-07-13 08:04:03.311187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:17.922 [2024-07-13 08:04:03.311201] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210e660 is same with the state(5) to be set 00:18:17.922 [2024-07-13 08:04:03.312347] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:17.922 [2024-07-13 08:04:03.312386] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210e660 (9): Bad file descriptor 00:18:17.922 [2024-07-13 08:04:03.312683] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.922 [2024-07-13 08:04:03.312771] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.922 [2024-07-13 08:04:03.312837] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:17.922 [2024-07-13 08:04:03.312873] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x210e660 with addr=10.0.0.2, port=4421 00:18:17.922 [2024-07-13 08:04:03.312892] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x210e660 is same with the state(5) to be set 00:18:17.922 [2024-07-13 08:04:03.312928] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x210e660 (9): Bad file descriptor 00:18:17.922 [2024-07-13 08:04:03.312961] 
nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:17.922 [2024-07-13 08:04:03.312992] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:17.922 [2024-07-13 08:04:03.313006] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:17.922 [2024-07-13 08:04:03.313221] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:17.922 [2024-07-13 08:04:03.313277] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:17.922 [2024-07-13 08:04:13.376575] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:18:17.922 Received shutdown signal, test time was about 55.370551 seconds 00:18:17.922 00:18:17.922 Latency(us) 00:18:17.922 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.922 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:17.922 Verification LBA range: start 0x0 length 0x4000 00:18:17.922 Nvme0n1 : 55.37 9763.05 38.14 0.00 0.00 13088.24 286.72 7046430.72 00:18:17.922 =================================================================================================================== 00:18:17.922 Total : 9763.05 38.14 0.00 0.00 13088.24 286.72 7046430.72 00:18:17.923 08:04:23 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:18.181 08:04:23 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:18:18.181 08:04:23 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:18:18.181 08:04:23 -- host/multipath.sh@125 -- # nvmftestfini 00:18:18.181 08:04:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:18.181 08:04:23 -- nvmf/common.sh@116 -- # sync 00:18:18.181 08:04:23 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:18:18.181 08:04:23 -- nvmf/common.sh@119 -- # set +e 00:18:18.181 08:04:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:18.181 08:04:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:18:18.181 rmmod nvme_tcp 00:18:18.181 rmmod nvme_fabrics 00:18:18.181 rmmod nvme_keyring 00:18:18.441 08:04:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:18.441 08:04:24 -- nvmf/common.sh@123 -- # set -e 00:18:18.441 08:04:24 -- nvmf/common.sh@124 -- # return 0 00:18:18.441 08:04:24 -- nvmf/common.sh@477 -- # '[' -n 79670 ']' 00:18:18.441 08:04:24 -- nvmf/common.sh@478 -- # killprocess 79670 00:18:18.441 08:04:24 -- common/autotest_common.sh@926 -- # '[' -z 79670 ']' 00:18:18.441 08:04:24 -- common/autotest_common.sh@930 -- # kill -0 79670 00:18:18.441 08:04:24 -- common/autotest_common.sh@931 -- # uname 00:18:18.441 08:04:24 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:18:18.441 08:04:24 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 79670 00:18:18.441 killing process with pid 79670 00:18:18.441 08:04:24 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:18:18.441 08:04:24 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:18:18.441 08:04:24 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 79670' 00:18:18.441 08:04:24 -- common/autotest_common.sh@945 -- # kill 79670 00:18:18.441 08:04:24 -- common/autotest_common.sh@950 -- # wait 79670 00:18:18.441 08:04:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 
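(Editor's aside on the pages of "ABORTED - SQ DELETION" notices above: they are queued READ/WRITE commands on I/O qpair 1 being completed manually when the qpair to port 4420 is torn down; the trace then shows bdev_nvme reconnecting to 10.0.0.2:4421 and the reset completing. How the multipath test provokes this is not shown in this excerpt, but the same rpc.py calls that do appear elsewhere in this log are enough to sketch the pattern. This is an illustration, not the test script; the NQN and ports are the ones printed above.)

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1
  # Remove the listener the initiator is using; its queued I/O is aborted
  # with "SQ DELETION" and completed manually, and the controller resets.
  $RPC nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420
  # Offer the alternate port so the reset can reconnect (the log above shows
  # the reconnect landing on 10.0.0.2:4421).
  $RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421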
00:18:18.441 08:04:24 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:18:18.441 08:04:24 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:18:18.441 08:04:24 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:18.441 08:04:24 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:18:18.441 08:04:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.441 08:04:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.441 08:04:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.441 08:04:24 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:18:18.441 00:18:18.441 real 1m0.957s 00:18:18.441 user 2m47.949s 00:18:18.441 sys 0m18.968s 00:18:18.441 08:04:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:18.441 08:04:24 -- common/autotest_common.sh@10 -- # set +x 00:18:18.441 ************************************ 00:18:18.441 END TEST nvmf_multipath 00:18:18.441 ************************************ 00:18:18.700 08:04:24 -- nvmf/nvmf.sh@117 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:18.700 08:04:24 -- common/autotest_common.sh@1077 -- # '[' 3 -le 1 ']' 00:18:18.700 08:04:24 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:18:18.700 08:04:24 -- common/autotest_common.sh@10 -- # set +x 00:18:18.700 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:18:18.700 ************************************ 00:18:18.700 START TEST nvmf_timeout 00:18:18.700 ************************************ 00:18:18.700 08:04:24 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:18:18.700 * Looking for test storage... 
00:18:18.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:18:18.701 08:04:24 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:18.701 08:04:24 -- nvmf/common.sh@7 -- # uname -s 00:18:18.701 08:04:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.701 08:04:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.701 08:04:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.701 08:04:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:18.701 08:04:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.701 08:04:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.701 08:04:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.701 08:04:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.701 08:04:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.701 08:04:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.701 08:04:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:18:18.701 08:04:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:18:18.701 08:04:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.701 08:04:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.701 08:04:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:18.701 08:04:24 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:18.701 08:04:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.701 08:04:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.701 08:04:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.701 08:04:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.701 08:04:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.701 08:04:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.701 08:04:24 -- paths/export.sh@5 
-- # export PATH 00:18:18.701 08:04:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.701 08:04:24 -- nvmf/common.sh@46 -- # : 0 00:18:18.701 08:04:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:18.701 08:04:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:18.701 08:04:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:18.701 08:04:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.701 08:04:24 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.701 08:04:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:18.701 08:04:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:18.701 08:04:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:18.701 08:04:24 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:18.701 08:04:24 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:18.701 08:04:24 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:18.701 08:04:24 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:18:18.701 08:04:24 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:18.701 08:04:24 -- host/timeout.sh@19 -- # nvmftestinit 00:18:18.701 08:04:24 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:18:18.701 08:04:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.701 08:04:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:18.701 08:04:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:18.701 08:04:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:18.701 08:04:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.701 08:04:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:18.701 08:04:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.701 08:04:24 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:18:18.701 08:04:24 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:18:18.701 08:04:24 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:18:18.701 08:04:24 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:18:18.701 08:04:24 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:18:18.701 08:04:24 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:18:18.701 08:04:24 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:18.701 08:04:24 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:18.701 08:04:24 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:18:18.701 08:04:24 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:18:18.701 08:04:24 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:18.701 08:04:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:18.701 08:04:24 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:18.701 08:04:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:18.701 08:04:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:18.701 08:04:24 -- nvmf/common.sh@149 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:18.701 08:04:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:18.701 08:04:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:18.701 08:04:24 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:18:18.701 08:04:24 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:18:18.701 Cannot find device "nvmf_tgt_br" 00:18:18.701 08:04:24 -- nvmf/common.sh@154 -- # true 00:18:18.701 08:04:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:18:18.701 Cannot find device "nvmf_tgt_br2" 00:18:18.701 08:04:24 -- nvmf/common.sh@155 -- # true 00:18:18.701 08:04:24 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:18:18.701 08:04:24 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:18:18.701 Cannot find device "nvmf_tgt_br" 00:18:18.701 08:04:24 -- nvmf/common.sh@157 -- # true 00:18:18.701 08:04:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:18:18.701 Cannot find device "nvmf_tgt_br2" 00:18:18.701 08:04:24 -- nvmf/common.sh@158 -- # true 00:18:18.701 08:04:24 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:18:18.961 08:04:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:18:18.961 08:04:24 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:18.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.961 08:04:24 -- nvmf/common.sh@161 -- # true 00:18:18.961 08:04:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:18.961 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:18.961 08:04:24 -- nvmf/common.sh@162 -- # true 00:18:18.961 08:04:24 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:18:18.961 08:04:24 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:18.961 08:04:24 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:18.961 08:04:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:18.961 08:04:24 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:18.961 08:04:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:18.961 08:04:24 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:18.961 08:04:24 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:18:18.961 08:04:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:18:18.961 08:04:24 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:18:18.961 08:04:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:18:18.961 08:04:24 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:18:18.961 08:04:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:18:18.961 08:04:24 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:18.961 08:04:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:18.961 08:04:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:18.961 08:04:24 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:18:18.961 08:04:24 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:18:18.961 08:04:24 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:18:18.961 08:04:24 -- 
nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:18.961 08:04:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:18.961 08:04:24 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:18.961 08:04:24 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:18.961 08:04:24 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:18:18.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:18.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:18:18.961 00:18:18.961 --- 10.0.0.2 ping statistics --- 00:18:18.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.961 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:18:18.961 08:04:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:18:18.961 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:18.961 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:18:18.961 00:18:18.961 --- 10.0.0.3 ping statistics --- 00:18:18.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.961 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:18:18.961 08:04:24 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:18.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:18.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:18:18.961 00:18:18.961 --- 10.0.0.1 ping statistics --- 00:18:18.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:18.961 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:18:18.961 08:04:24 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:18.961 08:04:24 -- nvmf/common.sh@421 -- # return 0 00:18:18.961 08:04:24 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:18.961 08:04:24 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:18.961 08:04:24 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:18:18.961 08:04:24 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:18:18.961 08:04:24 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:18.961 08:04:24 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:18:18.961 08:04:24 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:18:18.961 08:04:24 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:18:18.961 08:04:24 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:18.961 08:04:24 -- common/autotest_common.sh@712 -- # xtrace_disable 00:18:18.961 08:04:24 -- common/autotest_common.sh@10 -- # set +x 00:18:18.961 08:04:24 -- nvmf/common.sh@469 -- # nvmfpid=80483 00:18:18.961 08:04:24 -- nvmf/common.sh@470 -- # waitforlisten 80483 00:18:18.961 08:04:24 -- common/autotest_common.sh@819 -- # '[' -z 80483 ']' 00:18:18.961 08:04:24 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:18:18.961 08:04:24 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.961 08:04:24 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:18.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:18.961 08:04:24 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
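(Editor's aside: the nvmf_veth_init trace above builds a private network namespace for the target and bridges it to the initiator side. A condensed restatement of that topology, using only commands and addresses that appear in the trace — the second target interface (nvmf_tgt_if2 / 10.0.0.3) is set up the same way and is omitted here for brevity:)

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br     # initiator pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br      # target pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up && ip link set nvmf_init_br up && ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # same reachability check the trace performs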
00:18:18.961 08:04:24 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:18.961 08:04:24 -- common/autotest_common.sh@10 -- # set +x 00:18:19.221 [2024-07-13 08:04:24.801682] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:19.221 [2024-07-13 08:04:24.801831] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:19.221 [2024-07-13 08:04:24.942402] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:19.221 [2024-07-13 08:04:24.986244] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:19.221 [2024-07-13 08:04:24.986411] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:19.221 [2024-07-13 08:04:24.986428] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:19.221 [2024-07-13 08:04:24.986439] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:19.221 [2024-07-13 08:04:24.986539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.221 [2024-07-13 08:04:24.986554] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.175 08:04:25 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:20.175 08:04:25 -- common/autotest_common.sh@852 -- # return 0 00:18:20.175 08:04:25 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:20.175 08:04:25 -- common/autotest_common.sh@718 -- # xtrace_disable 00:18:20.175 08:04:25 -- common/autotest_common.sh@10 -- # set +x 00:18:20.175 08:04:25 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:20.175 08:04:25 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:20.175 08:04:25 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:20.434 [2024-07-13 08:04:26.102408] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:20.434 08:04:26 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:18:20.694 Malloc0 00:18:20.694 08:04:26 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:20.954 08:04:26 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:21.213 08:04:26 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:21.473 [2024-07-13 08:04:27.107454] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:21.473 08:04:27 -- host/timeout.sh@32 -- # bdevperf_pid=80520 00:18:21.473 08:04:27 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:18:21.473 08:04:27 -- host/timeout.sh@34 -- # waitforlisten 80520 /var/tmp/bdevperf.sock 00:18:21.473 08:04:27 -- common/autotest_common.sh@819 -- # '[' -z 80520 ']' 00:18:21.473 08:04:27 -- common/autotest_common.sh@823 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:18:21.473 08:04:27 -- common/autotest_common.sh@824 -- # local max_retries=100 00:18:21.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:21.473 08:04:27 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:21.473 08:04:27 -- common/autotest_common.sh@828 -- # xtrace_disable 00:18:21.473 08:04:27 -- common/autotest_common.sh@10 -- # set +x 00:18:21.473 [2024-07-13 08:04:27.186096] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:18:21.473 [2024-07-13 08:04:27.186217] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80520 ] 00:18:21.732 [2024-07-13 08:04:27.325874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.732 [2024-07-13 08:04:27.369030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:22.669 08:04:28 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:22.669 08:04:28 -- common/autotest_common.sh@852 -- # return 0 00:18:22.669 08:04:28 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:18:22.669 08:04:28 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:22.928 NVMe0n1 00:18:22.928 08:04:28 -- host/timeout.sh@51 -- # rpc_pid=80537 00:18:22.928 08:04:28 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:22.928 08:04:28 -- host/timeout.sh@53 -- # sleep 1 00:18:23.186 Running I/O for 10 seconds... 
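Condensed from the setup traced above: the timeout test only needs a TCP transport and one Malloc-backed subsystem on the target, plus a bdevperf initiator whose controller is attached with a deliberately short controller-loss timeout. The sketch below collects those RPC calls in one place, with the same paths and arguments that appear in the log (they are specific to the autotest VM and would differ elsewhere); treat it as a summary of the trace, not as the literal body of host/timeout.sh.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py                 # path from the trace

  # Target side (nvmf_tgt was started with -m 0x3 inside nvmf_tgt_ns_spdk)
  $rpc nvmf_create_transport -t tcp -o -u 8192                    # transport options copied verbatim from the trace
  $rpc bdev_malloc_create 64 512 -b Malloc0                       # 64 MiB RAM bdev with 512-byte blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Initiator side: bdevperf waits for RPCs (-z), then runs a 128-deep, 4 KiB verify workload
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1      # retry setting copied from the trace
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2          # the timeout knobs under test
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &

The next step in the trace removes the 10.0.0.2:4420 listener while this workload is running, which is what produces the long run of aborted reads and writes below.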
00:18:24.124 08:04:29 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:24.385 [2024-07-13 08:04:29.963957] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1474970 is same with the state(5) to be set 00:18:24.385 [2024-07-13 08:04:29.964013] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1474970 is same with the state(5) to be set 00:18:24.385 [2024-07-13 08:04:29.964026] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1474970 is same with the state(5) to be set 00:18:24.385 [2024-07-13 08:04:29.964035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1474970 is same with the state(5) to be set 00:18:24.385 [2024-07-13 08:04:29.964044] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1474970 is same with the state(5) to be set 00:18:24.385 [2024-07-13 08:04:29.964052] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1474970 is same with the state(5) to be set 00:18:24.385 [2024-07-13 08:04:29.964061] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1474970 is same with the state(5) to be set 00:18:24.385 [2024-07-13 08:04:29.964069] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1474970 is same with the state(5) to be set 00:18:24.385 [2024-07-13 08:04:29.964078] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1474970 is same with the state(5) to be set 00:18:24.385 [2024-07-13 08:04:29.964087] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1474970 is same with the state(5) to be set 00:18:24.385 [2024-07-13 08:04:29.964095] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1474970 is same with the state(5) to be set 00:18:24.385 [2024-07-13 08:04:29.964156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.385 [2024-07-13 08:04:29.964189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.964212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.385 [2024-07-13 08:04:29.964224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.964236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.385 [2024-07-13 08:04:29.964245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.964267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.385 [2024-07-13 08:04:29.964276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.964288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:24.385 [2024-07-13 08:04:29.964297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.964308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.385 [2024-07-13 08:04:29.964317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.964333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.385 [2024-07-13 08:04:29.964663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.964695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.385 [2024-07-13 08:04:29.964708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.964721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.385 [2024-07-13 08:04:29.964731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.964744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.385 [2024-07-13 08:04:29.964755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.964767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.385 [2024-07-13 08:04:29.964794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.965085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.385 [2024-07-13 08:04:29.965353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.965372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:109128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.385 [2024-07-13 08:04:29.965383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.965395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.385 [2024-07-13 08:04:29.965405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.965416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.385 
[2024-07-13 08:04:29.965426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.385 [2024-07-13 08:04:29.965437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.965735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.965764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.965790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.965806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.965817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.965829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.965838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.965850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.965859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.965871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.965881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.966235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:108544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.966264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.966278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:108600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.966289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.966301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.966310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.966322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.966331] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.966343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:109168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.386 [2024-07-13 08:04:29.966352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.966367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.386 [2024-07-13 08:04:29.966630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.966646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.966657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.966669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.966679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.966690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.386 [2024-07-13 08:04:29.966700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.966711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.966721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.967033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.967059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.967072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.967082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.967094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.967104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.967116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.967125] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.967137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.967285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.967402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.386 [2024-07-13 08:04:29.967417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.967429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.386 [2024-07-13 08:04:29.967438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.967451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.386 [2024-07-13 08:04:29.967460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.967471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.967741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.967783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.386 [2024-07-13 08:04:29.967797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.967809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.386 [2024-07-13 08:04:29.967819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.967831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:109304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.386 [2024-07-13 08:04:29.967840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.967853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.967862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.967874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:109320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.386 [2024-07-13 08:04:29.968012] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.968034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.968324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.968344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.968354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.968365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.968375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.968388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.968632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.968650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:108688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.968661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.968672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:108696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.968682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.968694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:108704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.969003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.969035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.969046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.969058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:108744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.969070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.969082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.969091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.969102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.386 [2024-07-13 08:04:29.969241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.969356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.386 [2024-07-13 08:04:29.969368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.386 [2024-07-13 08:04:29.969380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:109360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.969389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.969401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.969411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.969426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.969560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.969577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.969848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.969867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:109392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.969878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.969890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:109400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.969899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.969911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.969921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.970074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.970209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.970226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:109424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.970237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.970249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.970259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.970271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.970280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.970292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.970301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.970442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.970555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.970570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.970580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.970592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.970601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.970614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.970624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.970638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.970879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.970896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.970908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 
08:04:29.970920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.970929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.970941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.970950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.971091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.971204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.971220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.971231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.971242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.971253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.971264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.971273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.971286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.971534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.971562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.971575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.971587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.971596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.971608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.971618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.971629] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.971639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.971904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.971927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.971940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.971951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.971963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.971973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.971984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.971994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.972168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.972192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.972455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.972467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.972479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.972489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.972502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.972511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.972860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.972889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.972902] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.972913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.972924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.387 [2024-07-13 08:04:29.972934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.972945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.972955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.972967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.972977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.387 [2024-07-13 08:04:29.973146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.387 [2024-07-13 08:04:29.973274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.973289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.388 [2024-07-13 08:04:29.973300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.973311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.973321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.973333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.973343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.973356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.973587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.973605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.973619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.973638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.973952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.973985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.973997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.974008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.974018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.974029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.974039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.974050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.974313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.974342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.974354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.974367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.974380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.974392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.974402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.974414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.974653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.974681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.974692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.974704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:109696 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:18:24.388 [2024-07-13 08:04:29.974713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.974725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.974735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.974746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.974758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.974980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:24.388 [2024-07-13 08:04:29.974995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.975008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.975018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.975033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.975349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.975378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.975391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.975403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.975413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.975424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.975434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.975446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.975689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.975716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:109088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 
[2024-07-13 08:04:29.975727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.975739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:24.388 [2024-07-13 08:04:29.975749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.975761] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23452f0 is same with the state(5) to be set 00:18:24.388 [2024-07-13 08:04:29.975790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:24.388 [2024-07-13 08:04:29.976062] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:24.388 [2024-07-13 08:04:29.976076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109104 len:8 PRP1 0x0 PRP2 0x0 00:18:24.388 [2024-07-13 08:04:29.976086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.976131] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23452f0 was disconnected and freed. reset controller. 00:18:24.388 [2024-07-13 08:04:29.976552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.388 [2024-07-13 08:04:29.976582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.976595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.388 [2024-07-13 08:04:29.976605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.976615] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.388 [2024-07-13 08:04:29.976624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.976634] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:24.388 [2024-07-13 08:04:29.976643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:24.388 [2024-07-13 08:04:29.976909] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233abe0 is same with the state(5) to be set 00:18:24.388 [2024-07-13 08:04:29.977347] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:24.388 [2024-07-13 08:04:29.977387] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233abe0 (9): Bad file descriptor 00:18:24.388 [2024-07-13 08:04:29.977715] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:24.388 [2024-07-13 08:04:29.977815] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:24.388 [2024-07-13 
08:04:29.978164] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:24.388 [2024-07-13 08:04:29.978197] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x233abe0 with addr=10.0.0.2, port=4420 00:18:24.388 [2024-07-13 08:04:29.978210] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233abe0 is same with the state(5) to be set 00:18:24.388 [2024-07-13 08:04:29.978234] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233abe0 (9): Bad file descriptor 00:18:24.388 [2024-07-13 08:04:29.978254] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:24.388 [2024-07-13 08:04:29.978265] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:24.388 [2024-07-13 08:04:29.978277] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:24.388 [2024-07-13 08:04:29.978556] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:24.388 [2024-07-13 08:04:29.978573] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:24.388 08:04:29 -- host/timeout.sh@56 -- # sleep 2 00:18:26.297 [2024-07-13 08:04:31.978701] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:26.297 [2024-07-13 08:04:31.978825] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:26.297 [2024-07-13 08:04:31.978874] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:26.297 [2024-07-13 08:04:31.978892] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x233abe0 with addr=10.0.0.2, port=4420 00:18:26.297 [2024-07-13 08:04:31.978919] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233abe0 is same with the state(5) to be set 00:18:26.297 [2024-07-13 08:04:31.978949] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233abe0 (9): Bad file descriptor 00:18:26.297 [2024-07-13 08:04:31.978970] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:26.297 [2024-07-13 08:04:31.978980] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:26.297 [2024-07-13 08:04:31.978991] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:26.297 [2024-07-13 08:04:31.979052] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:26.297 [2024-07-13 08:04:31.979072] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:26.297 08:04:31 -- host/timeout.sh@57 -- # get_controller 00:18:26.297 08:04:31 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:26.297 08:04:31 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:18:26.556 08:04:32 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:18:26.556 08:04:32 -- host/timeout.sh@58 -- # get_bdev 00:18:26.556 08:04:32 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:18:26.556 08:04:32 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:18:26.814 08:04:32 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:18:26.814 08:04:32 -- host/timeout.sh@61 -- # sleep 5 00:18:28.188 [2024-07-13 08:04:33.979662] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.188 [2024-07-13 08:04:33.979780] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.188 [2024-07-13 08:04:33.979842] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:28.188 [2024-07-13 08:04:33.979862] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x233abe0 with addr=10.0.0.2, port=4420 00:18:28.188 [2024-07-13 08:04:33.979876] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x233abe0 is same with the state(5) to be set 00:18:28.188 [2024-07-13 08:04:33.979905] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233abe0 (9): Bad file descriptor 00:18:28.188 [2024-07-13 08:04:33.979926] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:28.188 [2024-07-13 08:04:33.979937] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:28.188 [2024-07-13 08:04:33.979948] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:28.188 [2024-07-13 08:04:33.979990] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:28.188 [2024-07-13 08:04:33.980002] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:30.722 [2024-07-13 08:04:35.980066] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:30.722 [2024-07-13 08:04:35.980155] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:30.722 [2024-07-13 08:04:35.980184] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:30.722 [2024-07-13 08:04:35.980194] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:18:30.722 [2024-07-13 08:04:35.980222] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
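The preceding stretch of log is the failure scenario playing out exactly as the attach flags dictate: the listener on 10.0.0.2:4420 was removed at 08:04:29, the in-flight commands were aborted (the long run of "ABORTED - SQ DELETION" completions), and bdev_nvme then retried the connection at roughly 2-second intervals (08:04:29.97, 08:04:31.97, 08:04:33.97) as set by --reconnect-delay-sec 2, each attempt failing with connect() errno 111 (ECONNREFUSED) because nothing is listening any more. At 08:04:35.98, past the 5-second --ctrlr-loss-timeout-sec window, the controller is left in the failed state, and the empty get_controller/get_bdev checks that follow confirm it was then torn down. As a compressed sketch of what those helper functions boil down to (same socket path as in the trace):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # After the controller-loss timeout expires, both lists should come back empty.
  controllers=$($rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name')
  bdevs=$($rpc -s /var/tmp/bdevperf.sock bdev_get_bdevs | jq -r '.[].name')
  [[ -z "$controllers" && -z "$bdevs" ]] && echo 'controller and bdev removed as expected'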
00:18:31.289
00:18:31.289 Latency(us)
00:18:31.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:31.289 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:31.289 Verification LBA range: start 0x0 length 0x4000
00:18:31.289 NVMe0n1 : 8.15 1666.58 6.51 15.70 0.00 76171.69 3217.22 7046430.72
00:18:31.289 ===================================================================================================================
00:18:31.289 Total : 1666.58 6.51 15.70 0.00 76171.69 3217.22 7046430.72
00:18:31.289 0
00:18:31.854 08:04:37 -- host/timeout.sh@62 -- # get_controller
00:18:31.854 08:04:37 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:18:31.854 08:04:37 -- host/timeout.sh@41 -- # jq -r '.[].name'
00:18:32.112 08:04:37 -- host/timeout.sh@62 -- # [[ '' == '' ]]
00:18:32.112 08:04:37 -- host/timeout.sh@63 -- # get_bdev
00:18:32.112 08:04:37 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs
00:18:32.112 08:04:37 -- host/timeout.sh@37 -- # jq -r '.[].name'
00:18:32.369 08:04:38 -- host/timeout.sh@63 -- # [[ '' == '' ]]
00:18:32.369 08:04:38 -- host/timeout.sh@65 -- # wait 80537
00:18:32.369 08:04:38 -- host/timeout.sh@67 -- # killprocess 80520
00:18:32.369 08:04:38 -- common/autotest_common.sh@926 -- # '[' -z 80520 ']'
00:18:32.369 08:04:38 -- common/autotest_common.sh@930 -- # kill -0 80520
00:18:32.369 08:04:38 -- common/autotest_common.sh@931 -- # uname
00:18:32.369 08:04:38 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:18:32.369 08:04:38 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80520
00:18:32.369 08:04:38 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:18:32.369 08:04:38 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:18:32.369 killing process with pid 80520
08:04:38 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80520'
Received shutdown signal, test time was about 9.213365 seconds
00:18:32.370
00:18:32.370 Latency(us)
00:18:32.370 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:32.370 ===================================================================================================================
00:18:32.370 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:18:32.370 08:04:38 -- common/autotest_common.sh@945 -- # kill 80520
00:18:32.370 08:04:38 -- common/autotest_common.sh@950 -- # wait 80520
00:18:32.370 08:04:38 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
[2024-07-13 08:04:38.441753] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:18:32.886 08:04:38 -- host/timeout.sh@74 -- # bdevperf_pid=80599
00:18:32.886 08:04:38 -- host/timeout.sh@76 -- # waitforlisten 80599 /var/tmp/bdevperf.sock
00:18:32.886 08:04:38 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f
00:18:32.886 08:04:38 -- common/autotest_common.sh@819 -- # '[' -z 80599 ']'
00:18:32.886 08:04:38 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:32.886 08:04:38 -- common/autotest_common.sh@824 -- # local max_retries=100
00:18:32.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:32.886 08:04:38 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:32.886 08:04:38 -- common/autotest_common.sh@828 -- # xtrace_disable
00:18:32.886 08:04:38 -- common/autotest_common.sh@10 -- # set +x
00:18:32.886 [2024-07-13 08:04:38.507764] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:18:32.886 [2024-07-13 08:04:38.507881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80599 ]
00:18:32.886 [2024-07-13 08:04:38.648874] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:32.886 [2024-07-13 08:04:38.687480] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
00:18:33.820 08:04:39 -- common/autotest_common.sh@848 -- # (( i == 0 ))
00:18:33.820 08:04:39 -- common/autotest_common.sh@852 -- # return 0
00:18:33.820 08:04:39 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
00:18:34.078 08:04:39 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1
00:18:34.336 NVMe0n1
00:18:34.336 08:04:40 -- host/timeout.sh@84 -- # rpc_pid=80611
00:18:34.336 08:04:40 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:18:34.336 08:04:40 -- host/timeout.sh@86 -- # sleep 1
00:18:34.336 Running I/O for 10 seconds...
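The two RPC calls at host/timeout.sh@78 and @79 define the reconnect behaviour that the rest of this log exercises. Restated on their own (values exactly as traced; the comments are an interpretation of the flags, not text from the log):

  # -r -1: retry setting passed by the test; -1 reads as 'no limit'.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1

  # Attach the remote subsystem with the reconnect policy under test:
  #   --reconnect-delay-sec 1       retry the TCP connection roughly every second
  #   --fast-io-fail-timeout-sec 2  start failing queued I/O after ~2 s without a connection
  #   --ctrlr-loss-timeout-sec 5    give up on the controller after ~5 s disconnected
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

In the run that follows, the listener is pulled for only about a second, so the 5-second controller-loss budget is never exhausted and the reset eventually succeeds.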
00:18:35.272 08:04:41 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:35.535 [2024-07-13 08:04:41.285423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148ae50 is same with the state(5) to be set 00:18:35.535 [2024-07-13 08:04:41.285477] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148ae50 is same with the state(5) to be set 00:18:35.535 [2024-07-13 08:04:41.285490] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148ae50 is same with the state(5) to be set 00:18:35.535 [2024-07-13 08:04:41.285499] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148ae50 is same with the state(5) to be set 00:18:35.535 [2024-07-13 08:04:41.285507] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148ae50 is same with the state(5) to be set 00:18:35.535 [2024-07-13 08:04:41.285516] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148ae50 is same with the state(5) to be set 00:18:35.535 [2024-07-13 08:04:41.285524] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148ae50 is same with the state(5) to be set 00:18:35.535 [2024-07-13 08:04:41.285533] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148ae50 is same with the state(5) to be set 00:18:35.535 [2024-07-13 08:04:41.285541] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148ae50 is same with the state(5) to be set 00:18:35.535 [2024-07-13 08:04:41.285549] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148ae50 is same with the state(5) to be set 00:18:35.535 [2024-07-13 08:04:41.285558] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x148ae50 is same with the state(5) to be set 00:18:35.535 [2024-07-13 08:04:41.285619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.535 [2024-07-13 08:04:41.285650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.535 [2024-07-13 08:04:41.286169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:109056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.535 [2024-07-13 08:04:41.286201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.535 [2024-07-13 08:04:41.286230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:108384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.535 [2024-07-13 08:04:41.286241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.535 [2024-07-13 08:04:41.286252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:108392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.535 [2024-07-13 08:04:41.286262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.535 [2024-07-13 08:04:41.286273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:108408 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:18:35.535 [2024-07-13 08:04:41.286283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.535 [2024-07-13 08:04:41.286294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:108416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.535 [2024-07-13 08:04:41.286303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.286314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:108424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.286324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.286613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:108440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.286685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.286701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:108448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.286710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.286722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:108456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.286731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.286744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:109112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.286753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.286765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.286789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.286804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:109128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.536 [2024-07-13 08:04:41.286923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.286938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.286948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.286960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:109144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 
[2024-07-13 08:04:41.287210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.287227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:109152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.287236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.287248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.287257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.287286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:108464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.287296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.287307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:108472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.287316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.287327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:108504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.287337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.287649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:108512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.287687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.287701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:108544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.287711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.287722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.287731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.287743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:108608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.287752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.287764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:108624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.287773] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.287784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:109168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.536 [2024-07-13 08:04:41.287956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.288210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:109176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.536 [2024-07-13 08:04:41.288226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.288239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:109184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.288248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.288404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:109192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.288476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.288490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:109200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.536 [2024-07-13 08:04:41.288499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.288511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:109208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.288521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.288533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.288557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.288568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:109224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.288594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.288620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.288882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.288905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.288915] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.288926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.288935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.288947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:109256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.536 [2024-07-13 08:04:41.288956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.288967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:109264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.536 [2024-07-13 08:04:41.288976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.288987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:109272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.536 [2024-07-13 08:04:41.288996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.289115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.289129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.289447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:109288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.536 [2024-07-13 08:04:41.289488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.289503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.536 [2024-07-13 08:04:41.289512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.289523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:109304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.536 [2024-07-13 08:04:41.289533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.289544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.289553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.289564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:109320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.536 [2024-07-13 08:04:41.289573] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.289584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:109328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.289592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.289859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.289871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.289883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:108640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.536 [2024-07-13 08:04:41.289892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.536 [2024-07-13 08:04:41.289904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:108664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.289914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.289926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:108688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.289935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.289946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:108696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.290068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.290084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:108704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.290094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.290106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:108728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.290115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.290127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:108744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.290264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.290395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:108784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.290414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.290673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:109344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.290694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.290962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:109352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.290986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.291000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:109360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.291010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.291327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.291339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.291356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:109376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.291366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.291378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:109384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.291387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.291398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:109392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.291407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.291419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:109400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.291684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.291920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:109408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.291934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.291946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:109416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.291957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.291969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.291979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.292249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:109432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.292269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.292282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:109440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.292292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.292303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:109448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.292313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.292324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:109456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.292333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.292345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:109464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.292354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.292365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:109472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.292487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.292503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:108800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.292660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.292679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:108808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.292830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.293130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:108840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.293238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 
[2024-07-13 08:04:41.293254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:108880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.293265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.293276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:108888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.293286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.293297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:108920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.293307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.293319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:108928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.293328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.293471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:108936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.293762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.293872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:109480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.293885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.293898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:109488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.293907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.293919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:109496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.293928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.293939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:109504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.293949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.294068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:109512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.294084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.294097] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:109520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.294366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.294382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.294393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.294404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:109536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.294413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.294668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:109544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.537 [2024-07-13 08:04:41.294682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.294694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:109552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.294704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.537 [2024-07-13 08:04:41.294715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:109560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.537 [2024-07-13 08:04:41.295111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.295128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.295139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.295151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:109576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.538 [2024-07-13 08:04:41.295160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.295171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.295181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.295192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:109592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.538 [2024-07-13 08:04:41.295201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.295487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:109600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.295596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.295612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.295622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.295634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:109616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.538 [2024-07-13 08:04:41.295644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.295656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:109624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.538 [2024-07-13 08:04:41.295665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.295676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:109632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.538 [2024-07-13 08:04:41.295685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.295942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:109640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.538 [2024-07-13 08:04:41.295956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.295968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:108944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.295977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.295989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:108952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.295999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.296010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:108968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.296019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.296135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:108976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.296153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.296166] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:85 nsid:1 lba:108992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.296596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.296613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:109000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.296623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.296636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:109008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.296645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.296657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.296666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.296677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:109648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.296686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.296830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:109656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.296938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.296955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:109664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.296965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.296976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:109672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.296985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.296997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.297006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.297018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.297137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.297155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 
nsid:1 lba:109696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.538 [2024-07-13 08:04:41.297499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.297576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.297588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.297599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:109712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.297608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.297620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:109720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:35.538 [2024-07-13 08:04:41.297630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.297641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:109728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.297650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.297661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:109032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.297671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.297900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.297914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.297925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:109064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.297935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.297946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:109072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.297956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.297967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.297976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.298081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109088 len:8 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.298098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.298110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:35.538 [2024-07-13 08:04:41.298120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.298131] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e2e90 is same with the state(5) to be set 00:18:35.538 [2024-07-13 08:04:41.298144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:35.538 [2024-07-13 08:04:41.298162] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:35.538 [2024-07-13 08:04:41.298404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:109104 len:8 PRP1 0x0 PRP2 0x0 00:18:35.538 [2024-07-13 08:04:41.298416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.298461] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23e2e90 was disconnected and freed. reset controller. 00:18:35.538 [2024-07-13 08:04:41.298822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.538 [2024-07-13 08:04:41.298846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.298875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.538 [2024-07-13 08:04:41.298884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.298894] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.538 [2024-07-13 08:04:41.298903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.538 [2024-07-13 08:04:41.298914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:35.538 [2024-07-13 08:04:41.298924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:35.539 [2024-07-13 08:04:41.298933] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4a20 is same with the state(5) to be set 00:18:35.539 [2024-07-13 08:04:41.299443] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.539 [2024-07-13 08:04:41.299480] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b4a20 (9): Bad file descriptor 00:18:35.539 [2024-07-13 08:04:41.299680] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.539 [2024-07-13 08:04:41.299832] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:18:35.539 [2024-07-13 08:04:41.300177] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:35.539 [2024-07-13 08:04:41.300208] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4a20 with addr=10.0.0.2, port=4420 00:18:35.539 [2024-07-13 08:04:41.300221] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4a20 is same with the state(5) to be set 00:18:35.539 [2024-07-13 08:04:41.300244] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b4a20 (9): Bad file descriptor 00:18:35.539 [2024-07-13 08:04:41.300261] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:35.539 [2024-07-13 08:04:41.300271] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:35.539 [2024-07-13 08:04:41.300282] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:35.539 [2024-07-13 08:04:41.300303] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:35.539 [2024-07-13 08:04:41.300485] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:35.539 08:04:41 -- host/timeout.sh@90 -- # sleep 1 00:18:36.556 [2024-07-13 08:04:42.300753] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:36.556 [2024-07-13 08:04:42.300877] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:36.556 [2024-07-13 08:04:42.300939] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:36.556 [2024-07-13 08:04:42.300972] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4a20 with addr=10.0.0.2, port=4420 00:18:36.556 [2024-07-13 08:04:42.301000] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4a20 is same with the state(5) to be set 00:18:36.556 [2024-07-13 08:04:42.301027] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b4a20 (9): Bad file descriptor 00:18:36.556 [2024-07-13 08:04:42.301046] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:36.556 [2024-07-13 08:04:42.301056] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:36.556 [2024-07-13 08:04:42.301378] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:36.556 [2024-07-13 08:04:42.301428] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:36.556 [2024-07-13 08:04:42.301440] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:36.556 08:04:42 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:36.814 [2024-07-13 08:04:42.555226] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:36.814 08:04:42 -- host/timeout.sh@92 -- # wait 80611 00:18:37.747 [2024-07-13 08:04:43.316419] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
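The errors between 08:04:41 and 08:04:43 are the intended behaviour of this scenario: host/timeout.sh@87 removed the TCP listener while bdevperf was mid-run, each reconnect attempt then failed with connect() errno = 111, and once @91 re-added the listener the pending reset completed ("Resetting controller successful"). The remove/re-add pair, condensed from the trace above:

  # Pull the listener out from under the running initiator; reconnect attempts start failing.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Stay disconnected for less than the controller-loss budget...
  sleep 1

  # ...then restore the listener; the next reconnect attempt succeeds and I/O resumes.
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420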
00:18:45.865 00:18:45.865 Latency(us) 00:18:45.865 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:45.865 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:45.865 Verification LBA range: start 0x0 length 0x4000 00:18:45.865 NVMe0n1 : 10.01 8396.59 32.80 0.00 0.00 15223.92 878.78 3035150.89 00:18:45.865 =================================================================================================================== 00:18:45.865 Total : 8396.59 32.80 0.00 0.00 15223.92 878.78 3035150.89 00:18:45.865 0 00:18:45.865 08:04:50 -- host/timeout.sh@97 -- # rpc_pid=80661 00:18:45.865 08:04:50 -- host/timeout.sh@98 -- # sleep 1 00:18:45.865 08:04:50 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:45.865 Running I/O for 10 seconds... 00:18:45.865 08:04:51 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:45.865 [2024-07-13 08:04:51.424259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424330] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424339] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424348] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424356] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424364] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424372] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424381] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424389] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424398] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424406] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424414] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424423] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424431] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 
08:04:51.424439] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424448] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424456] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424464] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424472] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424480] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424488] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14795e0 is same with the state(5) to be set 00:18:45.865 [2024-07-13 08:04:51.424946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:110024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.424987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.425010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:110048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.425021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.425033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:110064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.425043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.425055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:110072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.425065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.425077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:110080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.425086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.425098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:110112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.425408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.425427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:110120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.425437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.425449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.425458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.425470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:109432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.425480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.425491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:109448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.425500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.425512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:109456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.425521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.425999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:109464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.426028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.426042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:109472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.426052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.426064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:109480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.426074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.426086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:109504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.426096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.426108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.865 [2024-07-13 08:04:51.426118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.865 [2024-07-13 08:04:51.426130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:110152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.426140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:45.866 [2024-07-13 08:04:51.426379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:110160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.426392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.426404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:110168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.426414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.426689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:110176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.426702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.426713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:110184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.426723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.426735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:110192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.426744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.426757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:110200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.426766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.426907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:110208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.426923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.426935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:109512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.427044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.427058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:109520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.427067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.427207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:109528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.427307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 
08:04:51.427324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:109536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.427334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.427345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:109552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.427354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.427741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.427757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.427769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:109576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.427903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.427919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:109592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.428046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.428180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:110216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.428197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.428322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:110224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.428342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.428483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:110232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.428580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.428596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:110240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.428606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.428618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:110248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.428628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.428924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:109616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.429031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.429048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:109624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.429058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.429070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:109632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.429079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.429091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:109640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.429100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.429111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:109696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.429121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.429132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:109720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.429368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.429384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.429394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.429406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.429697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.429818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:110256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.429831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.429844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:110264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.429854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.429866] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:110272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.429875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.429887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:110280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.429896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.429908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:110288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.430140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.430177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:110296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.430189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.430299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:110304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.430319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.430332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:110312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.430460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.430479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:110320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.430489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.430625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:110328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.430757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.430918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:110336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.431025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.431042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:110344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.866 [2024-07-13 08:04:51.431052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.431064] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:110352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.866 [2024-07-13 08:04:51.431073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.866 [2024-07-13 08:04:51.431085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:110360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.431095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.431107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:110368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.867 [2024-07-13 08:04:51.431122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.431133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:110376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.431386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.431402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:110384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.867 [2024-07-13 08:04:51.431412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.431619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:110392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.431632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.431644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:110400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.431653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.431665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:110408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.431675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.431687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:110416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.431696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.431707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:110424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.431716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.431834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 
nsid:1 lba:110432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.431852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.431865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:110440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.867 [2024-07-13 08:04:51.431875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.432139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:110448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.867 [2024-07-13 08:04:51.432153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.432289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:110456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.432395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.432413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:109768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.432423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.432435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:109776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.432444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.432568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:109784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.432585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.432596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:109808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.432832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.432853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:109816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.432863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.432875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:109888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.432884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.433012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:109896 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.433033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.433167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:109952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.433181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.433322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:110464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.867 [2024-07-13 08:04:51.433467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.433609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:110472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.867 [2024-07-13 08:04:51.433734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.433754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:110480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.433984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.434012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:110488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.434023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.434035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:110496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.434045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.434056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:110504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.867 [2024-07-13 08:04:51.434065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.434077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:110512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.867 [2024-07-13 08:04:51.434086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.434098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:110520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.434107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.434373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:110528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:18:45.867 [2024-07-13 08:04:51.434395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.434409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:110536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.867 [2024-07-13 08:04:51.434419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.434545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:110544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.867 [2024-07-13 08:04:51.434555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.434703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:110552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.867 [2024-07-13 08:04:51.434812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.434830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:110560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.867 [2024-07-13 08:04:51.434840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.434852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:110568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.867 [2024-07-13 08:04:51.434861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.434873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:110576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.434882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.434894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:110584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.434903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.435133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.435154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.435168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:109984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.435178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.435302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:109992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.435313] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.435539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:110000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.435562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.435575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:110016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.867 [2024-07-13 08:04:51.435585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.867 [2024-07-13 08:04:51.435597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:110032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.435606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.435618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:110040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.435627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.435638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:110056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.435647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.435789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:110592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.435921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.435937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:110600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.868 [2024-07-13 08:04:51.436074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.436211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:110608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.436224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.436366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:110616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.868 [2024-07-13 08:04:51.436513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.436790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:110624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.436930] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.437078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:110632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.437169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.437186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:110640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.868 [2024-07-13 08:04:51.437195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.437206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:110648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.437216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.437227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:110656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.437354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.437370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:110664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.437503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.437519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:110672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.437645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.437667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:110680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.868 [2024-07-13 08:04:51.437767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.437797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:110688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.437807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.437819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:110696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.437828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.437840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:110704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.437849] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.437861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:110712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.438100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.438124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:110720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.868 [2024-07-13 08:04:51.438135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.438147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:110728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.438167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.438179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:110736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.868 [2024-07-13 08:04:51.438189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.438200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:110744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:18:45.868 [2024-07-13 08:04:51.438210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.438221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:110088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.438230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.438627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:110096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.438650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.438664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:110104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.438674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.438685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:110136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:45.868 [2024-07-13 08:04:51.438695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.438705] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e59a0 is same with the state(5) to be set 00:18:45.868 [2024-07-13 08:04:51.438718] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:45.868 
[2024-07-13 08:04:51.438726] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:45.868 [2024-07-13 08:04:51.438734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:110144 len:8 PRP1 0x0 PRP2 0x0 00:18:45.868 [2024-07-13 08:04:51.438743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.439125] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x23e59a0 was disconnected and freed. reset controller. 00:18:45.868 [2024-07-13 08:04:51.439216] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.868 [2024-07-13 08:04:51.439233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.439369] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.868 [2024-07-13 08:04:51.439503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.439518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.868 [2024-07-13 08:04:51.439650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.439663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:18:45.868 [2024-07-13 08:04:51.439804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:45.868 [2024-07-13 08:04:51.440077] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4a20 is same with the state(5) to be set 00:18:45.868 [2024-07-13 08:04:51.440511] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:45.868 [2024-07-13 08:04:51.440549] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b4a20 (9): Bad file descriptor 00:18:45.868 [2024-07-13 08:04:51.440837] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:45.868 [2024-07-13 08:04:51.440911] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:45.868 [2024-07-13 08:04:51.440957] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:45.868 [2024-07-13 08:04:51.441225] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4a20 with addr=10.0.0.2, port=4420 00:18:45.868 [2024-07-13 08:04:51.441242] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4a20 is same with the state(5) to be set 00:18:45.868 [2024-07-13 08:04:51.441265] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b4a20 (9): Bad file descriptor 00:18:45.868 [2024-07-13 08:04:51.441282] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:45.868 [2024-07-13 08:04:51.441292] 
nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:45.868 [2024-07-13 08:04:51.441302] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:45.868 [2024-07-13 08:04:51.441324] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:45.868 [2024-07-13 08:04:51.441445] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:45.868 08:04:51 -- host/timeout.sh@101 -- # sleep 3 00:18:46.803 [2024-07-13 08:04:52.441585] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.803 [2024-07-13 08:04:52.441709] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.803 [2024-07-13 08:04:52.441802] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:46.803 [2024-07-13 08:04:52.441819] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4a20 with addr=10.0.0.2, port=4420 00:18:46.803 [2024-07-13 08:04:52.441831] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4a20 is same with the state(5) to be set 00:18:46.803 [2024-07-13 08:04:52.441884] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b4a20 (9): Bad file descriptor 00:18:46.803 [2024-07-13 08:04:52.442241] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:46.803 [2024-07-13 08:04:52.442253] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:46.803 [2024-07-13 08:04:52.442264] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:46.803 [2024-07-13 08:04:52.442291] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:46.803 [2024-07-13 08:04:52.442304] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:47.735 [2024-07-13 08:04:53.442423] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:47.735 [2024-07-13 08:04:53.442547] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:47.735 [2024-07-13 08:04:53.442593] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:47.735 [2024-07-13 08:04:53.442610] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4a20 with addr=10.0.0.2, port=4420 00:18:47.735 [2024-07-13 08:04:53.442624] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4a20 is same with the state(5) to be set 00:18:47.735 [2024-07-13 08:04:53.442649] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b4a20 (9): Bad file descriptor 00:18:47.735 [2024-07-13 08:04:53.442960] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:47.735 [2024-07-13 08:04:53.442986] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:47.735 [2024-07-13 08:04:53.442998] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:18:47.735 [2024-07-13 08:04:53.443027] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:47.735 [2024-07-13 08:04:53.443039] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:48.668 [2024-07-13 08:04:54.444148] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:48.668 [2024-07-13 08:04:54.444235] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:48.668 [2024-07-13 08:04:54.444310] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:48.668 [2024-07-13 08:04:54.444343] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b4a20 with addr=10.0.0.2, port=4420 00:18:48.668 [2024-07-13 08:04:54.444356] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b4a20 is same with the state(5) to be set 00:18:48.668 [2024-07-13 08:04:54.444578] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b4a20 (9): Bad file descriptor 00:18:48.668 08:04:54 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:48.668 [2024-07-13 08:04:54.444852] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:48.668 [2024-07-13 08:04:54.444881] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:48.668 [2024-07-13 08:04:54.444891] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:48.668 [2024-07-13 08:04:54.447852] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:48.668 [2024-07-13 08:04:54.447900] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:48.926 [2024-07-13 08:04:54.692962] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:48.926 08:04:54 -- host/timeout.sh@103 -- # wait 80661 00:18:49.863 [2024-07-13 08:04:55.480951] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:55.163
00:18:55.163                                   Latency(us)
00:18:55.163 Device Information     : runtime(s)    IOPS   MiB/s  Fail/s   TO/s  Average      min        max
00:18:55.163 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:18:55.163 Verification LBA range: start 0x0 length 0x4000
00:18:55.163 NVMe0n1                :      10.01 7919.64   30.94 5280.50   0.00  9681.80  1042.62 3035150.89
00:18:55.163 ===================================================================================================================
00:18:55.163 Total                  :            7919.64   30.94 5280.50   0.00  9681.80     0.00 3035150.89
00:18:55.163 0
00:18:55.163 08:05:00 -- host/timeout.sh@105 -- # killprocess 80599
00:18:55.163 08:05:00 -- common/autotest_common.sh@926 -- # '[' -z 80599 ']'
00:18:55.163 08:05:00 -- common/autotest_common.sh@930 -- # kill -0 80599
00:18:55.163 08:05:00 -- common/autotest_common.sh@931 -- # uname
00:18:55.163 08:05:00 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']'
00:18:55.163 08:05:00 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80599
00:18:55.163 killing process with pid 80599
Received shutdown signal, test time was about 10.000000 seconds
00:18:55.163
00:18:55.163                                   Latency(us)
00:18:55.163 Device Information     : runtime(s)    IOPS   MiB/s  Fail/s   TO/s  Average      min        max
00:18:55.163 ===================================================================================================================
00:18:55.163 Total                  :               0.00    0.00    0.00   0.00     0.00     0.00       0.00
00:18:55.163 08:05:00 -- common/autotest_common.sh@932 -- # process_name=reactor_2
00:18:55.163 08:05:00 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']'
00:18:55.163 08:05:00 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80599'
00:18:55.163 08:05:00 -- common/autotest_common.sh@945 -- # kill 80599
00:18:55.163 08:05:00 -- common/autotest_common.sh@950 -- # wait 80599
00:18:55.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:18:55.163 08:05:00 -- host/timeout.sh@110 -- # bdevperf_pid=80715
00:18:55.163 08:05:00 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:18:55.163 08:05:00 -- host/timeout.sh@112 -- # waitforlisten 80715 /var/tmp/bdevperf.sock
00:18:55.163 08:05:00 -- common/autotest_common.sh@819 -- # '[' -z 80715 ']'
00:18:55.163 08:05:00 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:18:55.163 08:05:00 -- common/autotest_common.sh@824 -- # local max_retries=100
00:18:55.163 08:05:00 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:18:55.163 08:05:00 -- common/autotest_common.sh@828 -- # xtrace_disable
00:18:55.163 08:05:00 -- common/autotest_common.sh@10 -- # set +x
00:18:55.163 [2024-07-13 08:05:00.530390] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization...
00:18:55.163 [2024-07-13 08:05:00.530641] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80715 ] 00:18:55.163 [2024-07-13 08:05:00.664098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:55.163 [2024-07-13 08:05:00.704862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:55.730 08:05:01 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:18:55.730 08:05:01 -- common/autotest_common.sh@852 -- # return 0 00:18:55.730 08:05:01 -- host/timeout.sh@116 -- # dtrace_pid=80724 00:18:55.730 08:05:01 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 80715 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:18:55.730 08:05:01 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:18:56.295 08:05:01 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:18:56.553 NVMe0n1 00:18:56.553 08:05:02 -- host/timeout.sh@124 -- # rpc_pid=80761 00:18:56.553 08:05:02 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:56.553 08:05:02 -- host/timeout.sh@125 -- # sleep 1 00:18:56.553 Running I/O for 10 seconds... 00:18:57.492 08:05:03 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:57.755 [2024-07-13 08:05:03.404755] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.755 [2024-07-13 08:05:03.404834] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.755 [2024-07-13 08:05:03.404847] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.755 [2024-07-13 08:05:03.404856] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.755 [2024-07-13 08:05:03.404865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.755 [2024-07-13 08:05:03.404873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.755 [2024-07-13 08:05:03.404881] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.404890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.404898] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.404907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.404915] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.404923] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.404931] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.404955] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.404963] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.404972] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.404980] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.404988] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.404996] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405003] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405011] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405019] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405027] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405035] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405043] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405051] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405059] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405066] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405075] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405082] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405090] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405116] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405124] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405133] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405143] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405151] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405159] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405168] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405176] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405184] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405193] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405201] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405209] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405218] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405226] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405234] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405242] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405251] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405259] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405267] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405275] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405284] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 ns[2024-07-13 08:05:03.405292] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with id:0 cdw10:00000000 cdw11:00000000 00:18:57.756 
the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405300] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405309] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.756 [2024-07-13 08:05:03.405317] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405326] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with [2024-07-13 08:05:03.405326] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsthe state(5) to be set 00:18:57.756 id:0 cdw10:00000000 cdw11:00000000 00:18:57.756 [2024-07-13 08:05:03.405335] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.756 [2024-07-13 08:05:03.405344] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.756 [2024-07-13 08:05:03.405352] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405361] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405370] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405378] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405386] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405399] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405407] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405416] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405424] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405432] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405440] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405449] 
tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405457] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405465] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405473] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.756 [2024-07-13 08:05:03.405487] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405495] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405503] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405511] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405520] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405528] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405536] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405544] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405552] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405560] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405568] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405576] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405584] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405592] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405600] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.756 [2024-07-13 08:05:03.405609] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405618] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the 
state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405626] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405634] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405642] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405651] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405659] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405667] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405675] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405684] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405692] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405700] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405708] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405716] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405724] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405732] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405740] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405748] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405756] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405765] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405773] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405781] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405789] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405797] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405814] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405824] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405832] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405840] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405848] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405857] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405865] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405873] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405882] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405890] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405899] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405907] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405916] tcp.c:1574:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d9be0 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.405925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:18:57.757 [2024-07-13 08:05:03.405944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.405957] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2468a40 is same with the state(5) to be set 00:18:57.757 [2024-07-13 08:05:03.406473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.406504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.406529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:75432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.406541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.406553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:109976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.406563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.406575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:100816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.406585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.406597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.406609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.406621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:113080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.406630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.406642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.406885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.406906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.406917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.407155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:122424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.407184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.407198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:127600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.407209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.407221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:119392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.407231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.407244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:115768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.407253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.407266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7736 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.407276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.407297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.407307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.407319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:28616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.407589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.407832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:109568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.407863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.407877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:61800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.407887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.407899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:126792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.407911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.407923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:78864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.407933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.407944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:26688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.407954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.407966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:21016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.407976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.408318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:23368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.408345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.408361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:84320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 
08:05:03.408371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.408384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:53888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.757 [2024-07-13 08:05:03.408394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.757 [2024-07-13 08:05:03.408406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.408416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.408428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:15544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.408438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.408450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:30032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.408460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.408471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:17320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.408824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.408847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.408859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.408871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:113248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.408881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.408893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.408903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.408915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:91168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.408925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.409263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.409286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.409300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:70256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.409311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.409323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.409333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.409345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:105456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.409355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.409367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:75088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.409377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.409389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:125592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.409399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.409729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:115920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.409754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.409768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:123520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.409794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.409807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:126744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.409818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.409830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:49168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.409840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.409852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.409867] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.409879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:77512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.409889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.409977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:60896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.409990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.410002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:87944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.410011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.410165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:19416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.410298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.410427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:42808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.410450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.410710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:54400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.410953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.410972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:3376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.410983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.410995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.411005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.411017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:86472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.411027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.411039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:121160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.411049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.411166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.411180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.411315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.411457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.411737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:25576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.411899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.412140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:75272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.412157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.412170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:88432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.412180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.412314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:4608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.412336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.412613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:53912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.412764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.412880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:67616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.412895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.412907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.412917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.412929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:36272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.412939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:18:57.758 [2024-07-13 08:05:03.412951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:107064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.412960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.412972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:122960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.412982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.413259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:64640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.413536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.758 [2024-07-13 08:05:03.413689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:122944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.758 [2024-07-13 08:05:03.413822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.413839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:128720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.413849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.413862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.413873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.413885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:26304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.413895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.413907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:97728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.413917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.413928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:41928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.413938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.414282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.414297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 
08:05:03.414311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:124392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.414321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.414333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.414343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.414355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.414365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.414377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.414386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.414398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.414408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.414419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:90984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.414429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.414790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.414807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.414819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.414958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.415075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.415089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.415101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:56624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.415111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.415123] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.415133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.415145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.415155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.415167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.415176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.415189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.415198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.415546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:28432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.415561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.415573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:18912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.415583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.415596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:49768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.415607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.415619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.415760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.415904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:129080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.416048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.416134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:47680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.416148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.416159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:93 nsid:1 lba:56000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.416169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.416181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:33272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.416191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.416318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:13328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.416340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.416460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:50528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.416474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.416601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.416619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.416737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:74816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.416751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.416896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:25184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.417016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.417034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.417171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.417192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.417328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.417424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.417438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.417451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:127680 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.417461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.417473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.417482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.417494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:109560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.417504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.417632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:20776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.417649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.417895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:87064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.417915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.418040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:23592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.759 [2024-07-13 08:05:03.418057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.759 [2024-07-13 08:05:03.418202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:60184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.418286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.418302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:122472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.418313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.418325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:124120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.418335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.418347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:34984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.418465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.418483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:75696 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.418494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.418517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:51824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.418602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.418617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:49912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.418627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.418639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:70792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.418649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.418786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:23920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.418898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.418916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:37736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.418927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.418940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:56296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.418949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.419044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:41384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.419063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.419077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:32656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.419087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.419210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:70336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.419316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.419342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 
[2024-07-13 08:05:03.419353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.419367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.419616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.419635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:125120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.419646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.419658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:93616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:18:57.760 [2024-07-13 08:05:03.419669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.419680] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2497040 is same with the state(5) to be set 00:18:57.760 [2024-07-13 08:05:03.419787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:18:57.760 [2024-07-13 08:05:03.419802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:18:57.760 [2024-07-13 08:05:03.419812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:74128 len:8 PRP1 0x0 PRP2 0x0 00:18:57.760 [2024-07-13 08:05:03.419822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:57.760 [2024-07-13 08:05:03.420177] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2497040 was disconnected and freed. reset controller. 
00:18:57.760 [2024-07-13 08:05:03.420253] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2468a40 (9): Bad file descriptor 00:18:57.760 [2024-07-13 08:05:03.420699] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:57.760 [2024-07-13 08:05:03.420951] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:57.760 [2024-07-13 08:05:03.421035] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:57.760 [2024-07-13 08:05:03.421278] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:57.760 [2024-07-13 08:05:03.421312] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2468a40 with addr=10.0.0.2, port=4420 00:18:57.760 [2024-07-13 08:05:03.421325] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2468a40 is same with the state(5) to be set 00:18:57.760 [2024-07-13 08:05:03.421349] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2468a40 (9): Bad file descriptor 00:18:57.760 [2024-07-13 08:05:03.421368] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:57.760 [2024-07-13 08:05:03.421457] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:57.760 [2024-07-13 08:05:03.421471] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:57.760 [2024-07-13 08:05:03.421495] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:18:57.760 [2024-07-13 08:05:03.421636] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:57.760 08:05:03 -- host/timeout.sh@128 -- # wait 80761 00:18:59.661 [2024-07-13 08:05:05.421899] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:18:59.661 [2024-07-13 08:05:05.422000] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:59.661 [2024-07-13 08:05:05.422048] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:18:59.661 [2024-07-13 08:05:05.422066] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2468a40 with addr=10.0.0.2, port=4420 00:18:59.661 [2024-07-13 08:05:05.422080] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2468a40 is same with the state(5) to be set 00:18:59.661 [2024-07-13 08:05:05.422106] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2468a40 (9): Bad file descriptor 00:18:59.661 [2024-07-13 08:05:05.422127] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:18:59.661 [2024-07-13 08:05:05.422137] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:18:59.661 [2024-07-13 08:05:05.422148] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:18:59.661 [2024-07-13 08:05:05.422187] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:18:59.661 [2024-07-13 08:05:05.422200] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:02.192 [2024-07-13 08:05:07.422344] uring.c: 641:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:19:02.192 [2024-07-13 08:05:07.422439] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:02.192 [2024-07-13 08:05:07.422488] posix.c:1032:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:19:02.192 [2024-07-13 08:05:07.422507] nvme_tcp.c:2289:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2468a40 with addr=10.0.0.2, port=4420 00:19:02.192 [2024-07-13 08:05:07.422520] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2468a40 is same with the state(5) to be set 00:19:02.192 [2024-07-13 08:05:07.422546] nvme_tcp.c:2098:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2468a40 (9): Bad file descriptor 00:19:02.192 [2024-07-13 08:05:07.422567] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:02.192 [2024-07-13 08:05:07.422577] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:02.192 [2024-07-13 08:05:07.422588] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:02.192 [2024-07-13 08:05:07.422615] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:02.192 [2024-07-13 08:05:07.422947] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:19:04.096 [2024-07-13 08:05:09.423022] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:04.096 [2024-07-13 08:05:09.423115] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:04.096 [2024-07-13 08:05:09.423130] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:19:04.096 [2024-07-13 08:05:09.423141] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:19:04.096 [2024-07-13 08:05:09.423168] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:19:04.662 00:19:04.662 Latency(us) 00:19:04.662 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.662 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:19:04.662 NVMe0n1 : 8.16 1898.71 7.42 15.69 0.00 66898.88 8519.68 7046430.72 00:19:04.662 =================================================================================================================== 00:19:04.663 Total : 1898.71 7.42 15.69 0.00 66898.88 8519.68 7046430.72 00:19:04.663 0 00:19:04.663 08:05:10 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:04.663 Attaching 5 probes... 
00:19:04.663 1379.573924: reset bdev controller NVMe0 00:19:04.663 1379.634622: reconnect bdev controller NVMe0 00:19:04.663 3380.658279: reconnect delay bdev controller NVMe0 00:19:04.663 3380.685753: reconnect bdev controller NVMe0 00:19:04.663 5381.112357: reconnect delay bdev controller NVMe0 00:19:04.663 5381.133691: reconnect bdev controller NVMe0 00:19:04.663 7381.880884: reconnect delay bdev controller NVMe0 00:19:04.663 7381.900858: reconnect bdev controller NVMe0 00:19:04.663 08:05:10 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:19:04.663 08:05:10 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:19:04.663 08:05:10 -- host/timeout.sh@136 -- # kill 80724 00:19:04.663 08:05:10 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:19:04.663 08:05:10 -- host/timeout.sh@139 -- # killprocess 80715 00:19:04.663 08:05:10 -- common/autotest_common.sh@926 -- # '[' -z 80715 ']' 00:19:04.663 08:05:10 -- common/autotest_common.sh@930 -- # kill -0 80715 00:19:04.663 08:05:10 -- common/autotest_common.sh@931 -- # uname 00:19:04.663 08:05:10 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:04.663 08:05:10 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 80715 00:19:04.921 killing process with pid 80715 00:19:04.921 Received shutdown signal, test time was about 8.221825 seconds 00:19:04.921 00:19:04.921 Latency(us) 00:19:04.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:04.921 =================================================================================================================== 00:19:04.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:04.921 08:05:10 -- common/autotest_common.sh@932 -- # process_name=reactor_2 00:19:04.921 08:05:10 -- common/autotest_common.sh@936 -- # '[' reactor_2 = sudo ']' 00:19:04.921 08:05:10 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80715' 00:19:04.921 08:05:10 -- common/autotest_common.sh@945 -- # kill 80715 00:19:04.921 08:05:10 -- common/autotest_common.sh@950 -- # wait 80715 00:19:04.921 08:05:10 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:05.179 08:05:10 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:19:05.179 08:05:10 -- host/timeout.sh@145 -- # nvmftestfini 00:19:05.179 08:05:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:05.179 08:05:10 -- nvmf/common.sh@116 -- # sync 00:19:05.179 08:05:10 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:19:05.179 08:05:10 -- nvmf/common.sh@119 -- # set +e 00:19:05.179 08:05:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:05.179 08:05:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:19:05.179 rmmod nvme_tcp 00:19:05.179 rmmod nvme_fabrics 00:19:05.179 rmmod nvme_keyring 00:19:05.436 08:05:11 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:05.437 08:05:11 -- nvmf/common.sh@123 -- # set -e 00:19:05.437 08:05:11 -- nvmf/common.sh@124 -- # return 0 00:19:05.437 08:05:11 -- nvmf/common.sh@477 -- # '[' -n 80483 ']' 00:19:05.437 08:05:11 -- nvmf/common.sh@478 -- # killprocess 80483 00:19:05.437 08:05:11 -- common/autotest_common.sh@926 -- # '[' -z 80483 ']' 00:19:05.437 08:05:11 -- common/autotest_common.sh@930 -- # kill -0 80483 00:19:05.437 08:05:11 -- common/autotest_common.sh@931 -- # uname 00:19:05.437 08:05:11 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:19:05.437 08:05:11 -- common/autotest_common.sh@932 -- # ps --no-headers 
-o comm= 80483 00:19:05.437 killing process with pid 80483 00:19:05.437 08:05:11 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:19:05.437 08:05:11 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:19:05.437 08:05:11 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 80483' 00:19:05.437 08:05:11 -- common/autotest_common.sh@945 -- # kill 80483 00:19:05.437 08:05:11 -- common/autotest_common.sh@950 -- # wait 80483 00:19:05.437 08:05:11 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:05.437 08:05:11 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:19:05.437 08:05:11 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:19:05.437 08:05:11 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:05.437 08:05:11 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:19:05.437 08:05:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.437 08:05:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:05.437 08:05:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.437 08:05:11 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:19:05.695 00:19:05.695 real 0m46.960s 00:19:05.695 user 2m18.171s 00:19:05.695 sys 0m5.548s 00:19:05.695 08:05:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:05.695 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:19:05.695 ************************************ 00:19:05.695 END TEST nvmf_timeout 00:19:05.695 ************************************ 00:19:05.695 08:05:11 -- nvmf/nvmf.sh@120 -- # [[ virt == phy ]] 00:19:05.695 08:05:11 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:19:05.695 08:05:11 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:05.695 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:19:05.695 08:05:11 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:19:05.695 00:19:05.695 real 10m21.376s 00:19:05.695 user 29m1.615s 00:19:05.695 sys 3m23.105s 00:19:05.695 08:05:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:05.695 ************************************ 00:19:05.695 END TEST nvmf_tcp 00:19:05.695 ************************************ 00:19:05.695 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:19:05.695 08:05:11 -- spdk/autotest.sh@296 -- # [[ 1 -eq 0 ]] 00:19:05.695 08:05:11 -- spdk/autotest.sh@300 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:05.695 08:05:11 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:05.695 08:05:11 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:05.695 08:05:11 -- common/autotest_common.sh@10 -- # set +x 00:19:05.695 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:19:05.695 ************************************ 00:19:05.695 START TEST nvmf_dif 00:19:05.695 ************************************ 00:19:05.695 08:05:11 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:19:05.695 * Looking for test storage... 
00:19:05.695 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:05.695 08:05:11 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:05.695 08:05:11 -- nvmf/common.sh@7 -- # uname -s 00:19:05.696 08:05:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:05.696 08:05:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:05.696 08:05:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:05.696 08:05:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:05.696 08:05:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:05.696 08:05:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:05.696 08:05:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:05.696 08:05:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:05.696 08:05:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:05.696 08:05:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:05.696 08:05:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:19:05.696 08:05:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:19:05.696 08:05:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:05.696 08:05:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:05.696 08:05:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:05.696 08:05:11 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:05.696 08:05:11 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:05.696 08:05:11 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:05.696 08:05:11 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:05.696 08:05:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.696 08:05:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.696 08:05:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.696 08:05:11 -- paths/export.sh@5 -- # export PATH 00:19:05.696 08:05:11 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:05.696 08:05:11 -- nvmf/common.sh@46 -- # : 0 00:19:05.696 08:05:11 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:05.696 08:05:11 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:05.696 08:05:11 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:05.696 08:05:11 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:05.696 08:05:11 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:05.696 08:05:11 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:05.696 08:05:11 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:05.696 08:05:11 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:05.696 08:05:11 -- target/dif.sh@15 -- # NULL_META=16 00:19:05.696 08:05:11 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:19:05.696 08:05:11 -- target/dif.sh@15 -- # NULL_SIZE=64 00:19:05.696 08:05:11 -- target/dif.sh@15 -- # NULL_DIF=1 00:19:05.696 08:05:11 -- target/dif.sh@135 -- # nvmftestinit 00:19:05.696 08:05:11 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:19:05.696 08:05:11 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:05.696 08:05:11 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:05.696 08:05:11 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:05.696 08:05:11 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:05.696 08:05:11 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:05.696 08:05:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:19:05.696 08:05:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:05.696 08:05:11 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:19:05.696 08:05:11 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:19:05.696 08:05:11 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:19:05.696 08:05:11 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:19:05.696 08:05:11 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:19:05.696 08:05:11 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:19:05.696 08:05:11 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:05.696 08:05:11 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:05.696 08:05:11 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:05.696 08:05:11 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:19:05.696 08:05:11 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:05.696 08:05:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:05.696 08:05:11 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:05.696 08:05:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:05.696 08:05:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:05.696 08:05:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:05.696 08:05:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:05.696 08:05:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:05.696 08:05:11 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:19:05.955 08:05:11 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:19:05.955 Cannot find device "nvmf_tgt_br" 
00:19:05.955 08:05:11 -- nvmf/common.sh@154 -- # true 00:19:05.955 08:05:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:19:05.955 Cannot find device "nvmf_tgt_br2" 00:19:05.955 08:05:11 -- nvmf/common.sh@155 -- # true 00:19:05.955 08:05:11 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:19:05.955 08:05:11 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:19:05.955 Cannot find device "nvmf_tgt_br" 00:19:05.955 08:05:11 -- nvmf/common.sh@157 -- # true 00:19:05.955 08:05:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:19:05.955 Cannot find device "nvmf_tgt_br2" 00:19:05.955 08:05:11 -- nvmf/common.sh@158 -- # true 00:19:05.955 08:05:11 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:19:05.955 08:05:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:19:05.955 08:05:11 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:05.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:05.955 08:05:11 -- nvmf/common.sh@161 -- # true 00:19:05.955 08:05:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:05.955 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:05.955 08:05:11 -- nvmf/common.sh@162 -- # true 00:19:05.955 08:05:11 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:19:05.955 08:05:11 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:05.955 08:05:11 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:05.955 08:05:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:05.955 08:05:11 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:05.955 08:05:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:05.955 08:05:11 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:05.955 08:05:11 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:05.955 08:05:11 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:05.955 08:05:11 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:19:05.955 08:05:11 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:19:05.955 08:05:11 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:19:05.955 08:05:11 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:19:05.955 08:05:11 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:06.218 08:05:11 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:06.218 08:05:11 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:06.218 08:05:11 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:19:06.218 08:05:11 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:19:06.218 08:05:11 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:19:06.218 08:05:11 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:06.218 08:05:11 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:06.218 08:05:11 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:06.218 08:05:11 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:06.218 08:05:11 -- 
nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:19:06.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:06.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:19:06.218 00:19:06.218 --- 10.0.0.2 ping statistics --- 00:19:06.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.218 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:19:06.218 08:05:11 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:19:06.218 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:06.218 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:19:06.218 00:19:06.218 --- 10.0.0.3 ping statistics --- 00:19:06.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.218 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:06.218 08:05:11 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:06.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:06.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:19:06.218 00:19:06.218 --- 10.0.0.1 ping statistics --- 00:19:06.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:06.218 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:19:06.218 08:05:11 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:06.218 08:05:11 -- nvmf/common.sh@421 -- # return 0 00:19:06.218 08:05:11 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:19:06.218 08:05:11 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:19:06.478 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:19:06.478 0000:00:06.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:06.478 0000:00:07.0 (1b36 0010): Already using the uio_pci_generic driver 00:19:06.478 08:05:12 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:06.478 08:05:12 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:19:06.478 08:05:12 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:19:06.478 08:05:12 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:06.478 08:05:12 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:19:06.478 08:05:12 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:19:06.478 08:05:12 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:19:06.478 08:05:12 -- target/dif.sh@137 -- # nvmfappstart 00:19:06.478 08:05:12 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:06.478 08:05:12 -- common/autotest_common.sh@712 -- # xtrace_disable 00:19:06.478 08:05:12 -- common/autotest_common.sh@10 -- # set +x 00:19:06.478 08:05:12 -- nvmf/common.sh@469 -- # nvmfpid=81141 00:19:06.478 08:05:12 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:06.478 08:05:12 -- nvmf/common.sh@470 -- # waitforlisten 81141 00:19:06.478 08:05:12 -- common/autotest_common.sh@819 -- # '[' -z 81141 ']' 00:19:06.478 08:05:12 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.478 08:05:12 -- common/autotest_common.sh@824 -- # local max_retries=100 00:19:06.478 08:05:12 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:06.478 08:05:12 -- common/autotest_common.sh@828 -- # xtrace_disable 00:19:06.478 08:05:12 -- common/autotest_common.sh@10 -- # set +x 00:19:06.736 [2024-07-13 08:05:12.370190] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 00:19:06.736 [2024-07-13 08:05:12.370558] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.736 [2024-07-13 08:05:12.514144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.995 [2024-07-13 08:05:12.557165] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:06.995 [2024-07-13 08:05:12.557345] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.995 [2024-07-13 08:05:12.557361] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.995 [2024-07-13 08:05:12.557373] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:06.995 [2024-07-13 08:05:12.557407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:06.995 08:05:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:19:06.995 08:05:12 -- common/autotest_common.sh@852 -- # return 0 00:19:06.995 08:05:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:06.995 08:05:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:19:06.995 08:05:12 -- common/autotest_common.sh@10 -- # set +x 00:19:06.995 08:05:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.995 08:05:12 -- target/dif.sh@139 -- # create_transport 00:19:06.995 08:05:12 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:19:06.995 08:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:06.995 08:05:12 -- common/autotest_common.sh@10 -- # set +x 00:19:06.995 [2024-07-13 08:05:12.714761] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:06.995 08:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:06.995 08:05:12 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:19:06.995 08:05:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:06.995 08:05:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:06.995 08:05:12 -- common/autotest_common.sh@10 -- # set +x 00:19:06.995 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:19:06.995 ************************************ 00:19:06.995 START TEST fio_dif_1_default 00:19:06.995 ************************************ 00:19:06.995 08:05:12 -- common/autotest_common.sh@1104 -- # fio_dif_1 00:19:06.995 08:05:12 -- target/dif.sh@86 -- # create_subsystems 0 00:19:06.995 08:05:12 -- target/dif.sh@28 -- # local sub 00:19:06.995 08:05:12 -- target/dif.sh@30 -- # for sub in "$@" 00:19:06.995 08:05:12 -- target/dif.sh@31 -- # create_subsystem 0 00:19:06.995 08:05:12 -- target/dif.sh@18 -- # local sub_id=0 00:19:06.995 08:05:12 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:06.995 08:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:06.995 08:05:12 -- common/autotest_common.sh@10 -- # set +x 00:19:06.995 bdev_null0 00:19:06.995 08:05:12 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:06.995 08:05:12 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:06.995 08:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:06.995 08:05:12 -- common/autotest_common.sh@10 -- # set +x 00:19:06.995 08:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:06.995 08:05:12 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:06.995 08:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:06.995 08:05:12 -- common/autotest_common.sh@10 -- # set +x 00:19:06.995 08:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:06.995 08:05:12 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:06.995 08:05:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:06.995 08:05:12 -- common/autotest_common.sh@10 -- # set +x 00:19:06.995 [2024-07-13 08:05:12.762943] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:06.995 08:05:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:06.995 08:05:12 -- target/dif.sh@87 -- # fio /dev/fd/62 00:19:06.995 08:05:12 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:19:06.995 08:05:12 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:06.995 08:05:12 -- nvmf/common.sh@520 -- # config=() 00:19:06.995 08:05:12 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:06.995 08:05:12 -- nvmf/common.sh@520 -- # local subsystem config 00:19:06.995 08:05:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:06.995 08:05:12 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:06.995 08:05:12 -- target/dif.sh@82 -- # gen_fio_conf 00:19:06.995 08:05:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:06.995 { 00:19:06.995 "params": { 00:19:06.995 "name": "Nvme$subsystem", 00:19:06.995 "trtype": "$TEST_TRANSPORT", 00:19:06.995 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:06.995 "adrfam": "ipv4", 00:19:06.995 "trsvcid": "$NVMF_PORT", 00:19:06.995 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:06.995 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:06.995 "hdgst": ${hdgst:-false}, 00:19:06.995 "ddgst": ${ddgst:-false} 00:19:06.995 }, 00:19:06.995 "method": "bdev_nvme_attach_controller" 00:19:06.995 } 00:19:06.995 EOF 00:19:06.995 )") 00:19:06.995 08:05:12 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:19:06.995 08:05:12 -- target/dif.sh@54 -- # local file 00:19:06.995 08:05:12 -- target/dif.sh@56 -- # cat 00:19:06.995 08:05:12 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:06.995 08:05:12 -- common/autotest_common.sh@1318 -- # local sanitizers 00:19:06.995 08:05:12 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.995 08:05:12 -- nvmf/common.sh@542 -- # cat 00:19:06.995 08:05:12 -- common/autotest_common.sh@1320 -- # shift 00:19:06.995 08:05:12 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:19:06.995 08:05:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:06.995 08:05:12 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.995 08:05:12 -- 
common/autotest_common.sh@1324 -- # grep libasan 00:19:06.995 08:05:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:06.995 08:05:12 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:06.995 08:05:12 -- nvmf/common.sh@544 -- # jq . 00:19:06.995 08:05:12 -- target/dif.sh@72 -- # (( file <= files )) 00:19:06.995 08:05:12 -- nvmf/common.sh@545 -- # IFS=, 00:19:06.995 08:05:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:06.995 "params": { 00:19:06.995 "name": "Nvme0", 00:19:06.995 "trtype": "tcp", 00:19:06.995 "traddr": "10.0.0.2", 00:19:06.995 "adrfam": "ipv4", 00:19:06.995 "trsvcid": "4420", 00:19:06.995 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:06.995 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:06.995 "hdgst": false, 00:19:06.995 "ddgst": false 00:19:06.995 }, 00:19:06.995 "method": "bdev_nvme_attach_controller" 00:19:06.995 }' 00:19:06.995 08:05:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:06.995 08:05:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:06.995 08:05:12 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:06.996 08:05:12 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:06.996 08:05:12 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:19:06.996 08:05:12 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:07.254 08:05:12 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:07.254 08:05:12 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:07.254 08:05:12 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:07.254 08:05:12 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:07.254 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:07.254 fio-3.35 00:19:07.254 Starting 1 thread 00:19:07.512 [2024-07-13 08:05:13.301497] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:07.512 [2024-07-13 08:05:13.301585] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:19.728 00:19:19.728 filename0: (groupid=0, jobs=1): err= 0: pid=81193: Sat Jul 13 08:05:23 2024 00:19:19.728 read: IOPS=7671, BW=30.0MiB/s (31.4MB/s)(300MiB/10001msec) 00:19:19.728 slat (nsec): min=6626, max=69181, avg=10162.95, stdev=5324.00 00:19:19.728 clat (usec): min=360, max=5223, avg=491.30, stdev=61.38 00:19:19.728 lat (usec): min=367, max=5280, avg=501.46, stdev=62.07 00:19:19.728 clat percentiles (usec): 00:19:19.728 | 1.00th=[ 392], 5.00th=[ 416], 10.00th=[ 429], 20.00th=[ 449], 00:19:19.728 | 30.00th=[ 461], 40.00th=[ 474], 50.00th=[ 490], 60.00th=[ 502], 00:19:19.728 | 70.00th=[ 515], 80.00th=[ 529], 90.00th=[ 553], 95.00th=[ 578], 00:19:19.728 | 99.00th=[ 619], 99.50th=[ 635], 99.90th=[ 701], 99.95th=[ 734], 00:19:19.728 | 99.99th=[ 1647] 00:19:19.728 bw ( KiB/s): min=29029, max=31296, per=100.00%, avg=30721.95, stdev=466.93, samples=19 00:19:19.728 iops : min= 7257, max= 7824, avg=7680.47, stdev=116.78, samples=19 00:19:19.728 lat (usec) : 500=59.14%, 750=40.82%, 1000=0.02% 00:19:19.728 lat (msec) : 2=0.01%, 10=0.01% 00:19:19.728 cpu : usr=84.44%, sys=13.38%, ctx=11, majf=0, minf=8 00:19:19.728 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:19.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.728 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:19.728 issued rwts: total=76720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:19.728 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:19.728 00:19:19.728 Run status group 0 (all jobs): 00:19:19.728 READ: bw=30.0MiB/s (31.4MB/s), 30.0MiB/s-30.0MiB/s (31.4MB/s-31.4MB/s), io=300MiB (314MB), run=10001-10001msec 00:19:19.728 08:05:23 -- target/dif.sh@88 -- # destroy_subsystems 0 00:19:19.728 08:05:23 -- target/dif.sh@43 -- # local sub 00:19:19.728 08:05:23 -- target/dif.sh@45 -- # for sub in "$@" 00:19:19.728 08:05:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:19.728 08:05:23 -- target/dif.sh@36 -- # local sub_id=0 00:19:19.728 08:05:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:19.728 08:05:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.728 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.728 08:05:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.728 08:05:23 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:19.728 08:05:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.728 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.728 ************************************ 00:19:19.728 END TEST fio_dif_1_default 00:19:19.728 ************************************ 00:19:19.728 08:05:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.728 00:19:19.728 real 0m10.859s 00:19:19.728 user 0m8.973s 00:19:19.728 sys 0m1.583s 00:19:19.728 08:05:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:19.728 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.728 08:05:23 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:19:19.728 08:05:23 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:19.728 08:05:23 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:19.728 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.728 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 
1096: kill: (59682) - No such process 00:19:19.728 ************************************ 00:19:19.728 START TEST fio_dif_1_multi_subsystems 00:19:19.728 ************************************ 00:19:19.728 08:05:23 -- common/autotest_common.sh@1104 -- # fio_dif_1_multi_subsystems 00:19:19.728 08:05:23 -- target/dif.sh@92 -- # local files=1 00:19:19.728 08:05:23 -- target/dif.sh@94 -- # create_subsystems 0 1 00:19:19.728 08:05:23 -- target/dif.sh@28 -- # local sub 00:19:19.728 08:05:23 -- target/dif.sh@30 -- # for sub in "$@" 00:19:19.728 08:05:23 -- target/dif.sh@31 -- # create_subsystem 0 00:19:19.728 08:05:23 -- target/dif.sh@18 -- # local sub_id=0 00:19:19.728 08:05:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:19.728 08:05:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.728 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.728 bdev_null0 00:19:19.728 08:05:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.728 08:05:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:19.728 08:05:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.728 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.728 08:05:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.728 08:05:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:19.728 08:05:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.728 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.728 08:05:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.728 08:05:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:19.728 08:05:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.728 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.728 [2024-07-13 08:05:23.677356] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:19.728 08:05:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.729 08:05:23 -- target/dif.sh@30 -- # for sub in "$@" 00:19:19.729 08:05:23 -- target/dif.sh@31 -- # create_subsystem 1 00:19:19.729 08:05:23 -- target/dif.sh@18 -- # local sub_id=1 00:19:19.729 08:05:23 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:19.729 08:05:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.729 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.729 bdev_null1 00:19:19.729 08:05:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.729 08:05:23 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:19.729 08:05:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.729 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.729 08:05:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.729 08:05:23 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:19.729 08:05:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.729 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.729 08:05:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.729 08:05:23 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:19:19.729 08:05:23 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:19.729 08:05:23 -- common/autotest_common.sh@10 -- # set +x 00:19:19.729 08:05:23 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:19.729 08:05:23 -- target/dif.sh@95 -- # fio /dev/fd/62 00:19:19.729 08:05:23 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:19:19.729 08:05:23 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:19.729 08:05:23 -- nvmf/common.sh@520 -- # config=() 00:19:19.729 08:05:23 -- nvmf/common.sh@520 -- # local subsystem config 00:19:19.729 08:05:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:19.729 08:05:23 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:19.729 08:05:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:19.729 { 00:19:19.729 "params": { 00:19:19.729 "name": "Nvme$subsystem", 00:19:19.729 "trtype": "$TEST_TRANSPORT", 00:19:19.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.729 "adrfam": "ipv4", 00:19:19.729 "trsvcid": "$NVMF_PORT", 00:19:19.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.729 "hdgst": ${hdgst:-false}, 00:19:19.729 "ddgst": ${ddgst:-false} 00:19:19.729 }, 00:19:19.729 "method": "bdev_nvme_attach_controller" 00:19:19.729 } 00:19:19.729 EOF 00:19:19.729 )") 00:19:19.729 08:05:23 -- target/dif.sh@82 -- # gen_fio_conf 00:19:19.729 08:05:23 -- target/dif.sh@54 -- # local file 00:19:19.729 08:05:23 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:19.729 08:05:23 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:19:19.729 08:05:23 -- target/dif.sh@56 -- # cat 00:19:19.729 08:05:23 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:19.729 08:05:23 -- common/autotest_common.sh@1318 -- # local sanitizers 00:19:19.729 08:05:23 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.729 08:05:23 -- common/autotest_common.sh@1320 -- # shift 00:19:19.729 08:05:23 -- nvmf/common.sh@542 -- # cat 00:19:19.729 08:05:23 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:19:19.729 08:05:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:19.729 08:05:23 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.729 08:05:23 -- common/autotest_common.sh@1324 -- # grep libasan 00:19:19.729 08:05:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:19.729 08:05:23 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:19.729 08:05:23 -- target/dif.sh@72 -- # (( file <= files )) 00:19:19.729 08:05:23 -- target/dif.sh@73 -- # cat 00:19:19.729 08:05:23 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:19.729 08:05:23 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:19.729 { 00:19:19.729 "params": { 00:19:19.729 "name": "Nvme$subsystem", 00:19:19.729 "trtype": "$TEST_TRANSPORT", 00:19:19.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:19.729 "adrfam": "ipv4", 00:19:19.729 "trsvcid": "$NVMF_PORT", 00:19:19.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:19.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:19.729 "hdgst": ${hdgst:-false}, 00:19:19.729 "ddgst": ${ddgst:-false} 00:19:19.729 }, 00:19:19.729 "method": "bdev_nvme_attach_controller" 00:19:19.729 } 00:19:19.729 EOF 
00:19:19.729 )") 00:19:19.729 08:05:23 -- nvmf/common.sh@542 -- # cat 00:19:19.729 08:05:23 -- target/dif.sh@72 -- # (( file++ )) 00:19:19.729 08:05:23 -- target/dif.sh@72 -- # (( file <= files )) 00:19:19.729 08:05:23 -- nvmf/common.sh@544 -- # jq . 00:19:19.729 08:05:23 -- nvmf/common.sh@545 -- # IFS=, 00:19:19.729 08:05:23 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:19.729 "params": { 00:19:19.729 "name": "Nvme0", 00:19:19.729 "trtype": "tcp", 00:19:19.729 "traddr": "10.0.0.2", 00:19:19.729 "adrfam": "ipv4", 00:19:19.729 "trsvcid": "4420", 00:19:19.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:19.729 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:19.729 "hdgst": false, 00:19:19.729 "ddgst": false 00:19:19.729 }, 00:19:19.729 "method": "bdev_nvme_attach_controller" 00:19:19.729 },{ 00:19:19.729 "params": { 00:19:19.729 "name": "Nvme1", 00:19:19.729 "trtype": "tcp", 00:19:19.729 "traddr": "10.0.0.2", 00:19:19.729 "adrfam": "ipv4", 00:19:19.729 "trsvcid": "4420", 00:19:19.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:19.729 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:19.729 "hdgst": false, 00:19:19.729 "ddgst": false 00:19:19.729 }, 00:19:19.729 "method": "bdev_nvme_attach_controller" 00:19:19.729 }' 00:19:19.729 08:05:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:19.729 08:05:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:19.729 08:05:23 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:19.729 08:05:23 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:19.729 08:05:23 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:19:19.729 08:05:23 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:19.729 08:05:23 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:19.729 08:05:23 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:19.729 08:05:23 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:19.729 08:05:23 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:19.729 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:19.729 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:19:19.729 fio-3.35 00:19:19.729 Starting 2 threads 00:19:19.729 [2024-07-13 08:05:24.323079] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:19:19.729 [2024-07-13 08:05:24.323202] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:29.699 00:19:29.699 filename0: (groupid=0, jobs=1): err= 0: pid=81287: Sat Jul 13 08:05:34 2024 00:19:29.699 read: IOPS=4451, BW=17.4MiB/s (18.2MB/s)(174MiB/10001msec) 00:19:29.699 slat (nsec): min=6129, max=71155, avg=15247.22, stdev=5890.47 00:19:29.699 clat (usec): min=623, max=5498, avg=857.63, stdev=76.96 00:19:29.699 lat (usec): min=632, max=5518, avg=872.88, stdev=77.62 00:19:29.699 clat percentiles (usec): 00:19:29.699 | 1.00th=[ 734], 5.00th=[ 766], 10.00th=[ 783], 20.00th=[ 807], 00:19:29.699 | 30.00th=[ 824], 40.00th=[ 840], 50.00th=[ 848], 60.00th=[ 865], 00:19:29.699 | 70.00th=[ 889], 80.00th=[ 906], 90.00th=[ 938], 95.00th=[ 963], 00:19:29.699 | 99.00th=[ 1037], 99.50th=[ 1074], 99.90th=[ 1188], 99.95th=[ 1221], 00:19:29.699 | 99.99th=[ 1352] 00:19:29.699 bw ( KiB/s): min=16864, max=18048, per=50.01%, avg=17813.89, stdev=241.84, samples=19 00:19:29.699 iops : min= 4216, max= 4512, avg=4453.47, stdev=60.46, samples=19 00:19:29.699 lat (usec) : 750=2.14%, 1000=95.73% 00:19:29.699 lat (msec) : 2=2.12%, 10=0.01% 00:19:29.699 cpu : usr=89.30%, sys=9.20%, ctx=31, majf=0, minf=0 00:19:29.699 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.699 issued rwts: total=44516,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.699 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:29.699 filename1: (groupid=0, jobs=1): err= 0: pid=81288: Sat Jul 13 08:05:34 2024 00:19:29.699 read: IOPS=4452, BW=17.4MiB/s (18.2MB/s)(174MiB/10001msec) 00:19:29.699 slat (nsec): min=7081, max=72352, avg=15545.16, stdev=5893.93 00:19:29.699 clat (usec): min=446, max=4285, avg=854.93, stdev=68.24 00:19:29.699 lat (usec): min=454, max=4328, avg=870.47, stdev=68.93 00:19:29.699 clat percentiles (usec): 00:19:29.699 | 1.00th=[ 750], 5.00th=[ 775], 10.00th=[ 783], 20.00th=[ 807], 00:19:29.699 | 30.00th=[ 824], 40.00th=[ 832], 50.00th=[ 848], 60.00th=[ 865], 00:19:29.699 | 70.00th=[ 881], 80.00th=[ 898], 90.00th=[ 930], 95.00th=[ 963], 00:19:29.699 | 99.00th=[ 1020], 99.50th=[ 1057], 99.90th=[ 1172], 99.95th=[ 1205], 00:19:29.699 | 99.99th=[ 1336] 00:19:29.699 bw ( KiB/s): min=16897, max=18048, per=50.03%, avg=17819.00, stdev=236.69, samples=19 00:19:29.699 iops : min= 4224, max= 4512, avg=4454.74, stdev=59.23, samples=19 00:19:29.699 lat (usec) : 500=0.04%, 750=0.93%, 1000=97.35% 00:19:29.699 lat (msec) : 2=1.68%, 10=0.01% 00:19:29.699 cpu : usr=89.63%, sys=8.87%, ctx=20, majf=0, minf=0 00:19:29.699 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:29.699 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.699 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:29.699 issued rwts: total=44532,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:29.699 latency : target=0, window=0, percentile=100.00%, depth=4 00:19:29.699 00:19:29.699 Run status group 0 (all jobs): 00:19:29.699 READ: bw=34.8MiB/s (36.5MB/s), 17.4MiB/s-17.4MiB/s (18.2MB/s-18.2MB/s), io=348MiB (365MB), run=10001-10001msec 00:19:29.699 08:05:34 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:19:29.699 08:05:34 -- target/dif.sh@43 -- # local sub 00:19:29.699 08:05:34 -- target/dif.sh@45 -- # for sub in "$@" 00:19:29.699 08:05:34 -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:19:29.699 08:05:34 -- target/dif.sh@36 -- # local sub_id=0 00:19:29.699 08:05:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:29.699 08:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.699 08:05:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.699 08:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.699 08:05:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:29.699 08:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.699 08:05:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.699 08:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.699 08:05:34 -- target/dif.sh@45 -- # for sub in "$@" 00:19:29.699 08:05:34 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:29.699 08:05:34 -- target/dif.sh@36 -- # local sub_id=1 00:19:29.699 08:05:34 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:29.699 08:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.699 08:05:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.699 08:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.699 08:05:34 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:29.699 08:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.699 08:05:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.699 ************************************ 00:19:29.699 END TEST fio_dif_1_multi_subsystems 00:19:29.699 ************************************ 00:19:29.699 08:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.699 00:19:29.699 real 0m10.998s 00:19:29.699 user 0m18.550s 00:19:29.699 sys 0m2.049s 00:19:29.699 08:05:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:29.699 08:05:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.699 08:05:34 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:19:29.699 08:05:34 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:29.699 08:05:34 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:29.699 08:05:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.699 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:19:29.699 ************************************ 00:19:29.699 START TEST fio_dif_rand_params 00:19:29.699 ************************************ 00:19:29.699 08:05:34 -- common/autotest_common.sh@1104 -- # fio_dif_rand_params 00:19:29.699 08:05:34 -- target/dif.sh@100 -- # local NULL_DIF 00:19:29.699 08:05:34 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:19:29.699 08:05:34 -- target/dif.sh@103 -- # NULL_DIF=3 00:19:29.699 08:05:34 -- target/dif.sh@103 -- # bs=128k 00:19:29.699 08:05:34 -- target/dif.sh@103 -- # numjobs=3 00:19:29.699 08:05:34 -- target/dif.sh@103 -- # iodepth=3 00:19:29.699 08:05:34 -- target/dif.sh@103 -- # runtime=5 00:19:29.699 08:05:34 -- target/dif.sh@105 -- # create_subsystems 0 00:19:29.699 08:05:34 -- target/dif.sh@28 -- # local sub 00:19:29.699 08:05:34 -- target/dif.sh@30 -- # for sub in "$@" 00:19:29.699 08:05:34 -- target/dif.sh@31 -- # create_subsystem 0 00:19:29.699 08:05:34 -- target/dif.sh@18 -- # local sub_id=0 00:19:29.699 08:05:34 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:29.699 08:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.699 08:05:34 -- 
common/autotest_common.sh@10 -- # set +x 00:19:29.699 bdev_null0 00:19:29.699 08:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.699 08:05:34 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:29.699 08:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.699 08:05:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.699 08:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.699 08:05:34 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:29.699 08:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.699 08:05:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.699 08:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.699 08:05:34 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:29.699 08:05:34 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:29.699 08:05:34 -- common/autotest_common.sh@10 -- # set +x 00:19:29.699 [2024-07-13 08:05:34.734509] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:29.699 08:05:34 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:29.699 08:05:34 -- target/dif.sh@106 -- # fio /dev/fd/62 00:19:29.699 08:05:34 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:19:29.699 08:05:34 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:29.699 08:05:34 -- nvmf/common.sh@520 -- # config=() 00:19:29.699 08:05:34 -- nvmf/common.sh@520 -- # local subsystem config 00:19:29.699 08:05:34 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:29.699 08:05:34 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:29.699 { 00:19:29.699 "params": { 00:19:29.699 "name": "Nvme$subsystem", 00:19:29.699 "trtype": "$TEST_TRANSPORT", 00:19:29.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:29.699 "adrfam": "ipv4", 00:19:29.699 "trsvcid": "$NVMF_PORT", 00:19:29.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:29.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:29.699 "hdgst": ${hdgst:-false}, 00:19:29.699 "ddgst": ${ddgst:-false} 00:19:29.699 }, 00:19:29.699 "method": "bdev_nvme_attach_controller" 00:19:29.699 } 00:19:29.699 EOF 00:19:29.699 )") 00:19:29.699 08:05:34 -- target/dif.sh@82 -- # gen_fio_conf 00:19:29.699 08:05:34 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:29.699 08:05:34 -- target/dif.sh@54 -- # local file 00:19:29.699 08:05:34 -- target/dif.sh@56 -- # cat 00:19:29.699 08:05:34 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:29.699 08:05:34 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:19:29.699 08:05:34 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:29.699 08:05:34 -- nvmf/common.sh@542 -- # cat 00:19:29.699 08:05:34 -- common/autotest_common.sh@1318 -- # local sanitizers 00:19:29.699 08:05:34 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:29.699 08:05:34 -- common/autotest_common.sh@1320 -- # shift 00:19:29.699 08:05:34 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:19:29.699 08:05:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:29.700 08:05:34 -- target/dif.sh@72 -- # 
(( file = 1 )) 00:19:29.700 08:05:34 -- target/dif.sh@72 -- # (( file <= files )) 00:19:29.700 08:05:34 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:29.700 08:05:34 -- common/autotest_common.sh@1324 -- # grep libasan 00:19:29.700 08:05:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:29.700 08:05:34 -- nvmf/common.sh@544 -- # jq . 00:19:29.700 08:05:34 -- nvmf/common.sh@545 -- # IFS=, 00:19:29.700 08:05:34 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:29.700 "params": { 00:19:29.700 "name": "Nvme0", 00:19:29.700 "trtype": "tcp", 00:19:29.700 "traddr": "10.0.0.2", 00:19:29.700 "adrfam": "ipv4", 00:19:29.700 "trsvcid": "4420", 00:19:29.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:29.700 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:29.700 "hdgst": false, 00:19:29.700 "ddgst": false 00:19:29.700 }, 00:19:29.700 "method": "bdev_nvme_attach_controller" 00:19:29.700 }' 00:19:29.700 08:05:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:29.700 08:05:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:29.700 08:05:34 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:29.700 08:05:34 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:29.700 08:05:34 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:19:29.700 08:05:34 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:29.700 08:05:34 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:29.700 08:05:34 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:29.700 08:05:34 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:29.700 08:05:34 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:29.700 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:29.700 ... 00:19:29.700 fio-3.35 00:19:29.700 Starting 3 threads 00:19:29.700 [2024-07-13 08:05:35.275167] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
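For the rand_params pass the backing bdev is recreated with DIF type 3 and fio switches to 128 KiB random reads at queue depth 3 from three jobs over a fixed five-second window, which is what the three ~30 MiB/s per-job results below reflect. A rough standalone equivalent, assuming the subsystem is attached as Nvme0 so its namespace appears as the bdev Nvme0n1, and again using on-disk files in place of the /dev/fd/61 and /dev/fd/62 descriptors the test builds in memory:

# recreate the null bdev with DIF type 3 metadata
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# job file matching the banner below: randread, 128 KiB blocks, iodepth 3, 3 jobs, 5 s
cat > rand_params.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=128k
iodepth=3
numjobs=3
time_based=1
runtime=5

[filename0]
filename=Nvme0n1
EOF

LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --spdk_json_conf=bdev.json rand_params.fio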
00:19:29.700 [2024-07-13 08:05:35.275245] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:34.977 00:19:34.977 filename0: (groupid=0, jobs=1): err= 0: pid=81380: Sat Jul 13 08:05:40 2024 00:19:34.977 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(152MiB/5010msec) 00:19:34.977 slat (nsec): min=7464, max=90667, avg=16208.46, stdev=6548.98 00:19:34.977 clat (usec): min=11384, max=14884, avg=12333.76, stdev=471.82 00:19:34.977 lat (usec): min=11392, max=14909, avg=12349.97, stdev=472.22 00:19:34.977 clat percentiles (usec): 00:19:34.977 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11731], 20.00th=[11863], 00:19:34.977 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:19:34.977 | 70.00th=[12649], 80.00th=[12780], 90.00th=[12911], 95.00th=[13042], 00:19:34.977 | 99.00th=[13304], 99.50th=[13304], 99.90th=[14877], 99.95th=[14877], 00:19:34.977 | 99.99th=[14877] 00:19:34.977 bw ( KiB/s): min=30658, max=31488, per=33.31%, avg=31021.00, stdev=402.37, samples=10 00:19:34.977 iops : min= 239, max= 246, avg=242.30, stdev= 3.20, samples=10 00:19:34.977 lat (msec) : 20=100.00% 00:19:34.977 cpu : usr=91.42%, sys=8.03%, ctx=3, majf=0, minf=9 00:19:34.977 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:34.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.977 issued rwts: total=1215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.977 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:34.977 filename0: (groupid=0, jobs=1): err= 0: pid=81381: Sat Jul 13 08:05:40 2024 00:19:34.977 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(152MiB/5008msec) 00:19:34.977 slat (nsec): min=8204, max=91589, avg=17142.60, stdev=6021.96 00:19:34.977 clat (usec): min=11423, max=13694, avg=12326.26, stdev=455.77 00:19:34.977 lat (usec): min=11437, max=13710, avg=12343.40, stdev=456.19 00:19:34.977 clat percentiles (usec): 00:19:34.977 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11731], 20.00th=[11863], 00:19:34.977 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:19:34.977 | 70.00th=[12649], 80.00th=[12780], 90.00th=[12911], 95.00th=[13042], 00:19:34.977 | 99.00th=[13304], 99.50th=[13304], 99.90th=[13698], 99.95th=[13698], 00:19:34.977 | 99.99th=[13698] 00:19:34.977 bw ( KiB/s): min=30720, max=31488, per=33.32%, avg=31033.30, stdev=391.78, samples=10 00:19:34.977 iops : min= 240, max= 246, avg=242.40, stdev= 3.10, samples=10 00:19:34.977 lat (msec) : 20=100.00% 00:19:34.977 cpu : usr=91.85%, sys=7.59%, ctx=9, majf=0, minf=0 00:19:34.977 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:34.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.977 issued rwts: total=1215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.977 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:34.977 filename0: (groupid=0, jobs=1): err= 0: pid=81382: Sat Jul 13 08:05:40 2024 00:19:34.977 read: IOPS=242, BW=30.3MiB/s (31.8MB/s)(152MiB/5007msec) 00:19:34.977 slat (nsec): min=7725, max=90810, avg=17480.52, stdev=6367.85 00:19:34.977 clat (usec): min=9615, max=14801, avg=12321.73, stdev=488.78 00:19:34.977 lat (usec): min=9625, max=14825, avg=12339.21, stdev=489.12 00:19:34.977 clat percentiles (usec): 00:19:34.977 | 1.00th=[11469], 5.00th=[11600], 10.00th=[11731], 
20.00th=[11863], 00:19:34.977 | 30.00th=[11994], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:19:34.977 | 70.00th=[12649], 80.00th=[12780], 90.00th=[12911], 95.00th=[13042], 00:19:34.977 | 99.00th=[13304], 99.50th=[13304], 99.90th=[14746], 99.95th=[14746], 00:19:34.977 | 99.99th=[14746] 00:19:34.977 bw ( KiB/s): min=30720, max=31488, per=33.33%, avg=31039.40, stdev=386.81, samples=10 00:19:34.977 iops : min= 240, max= 246, avg=242.40, stdev= 3.10, samples=10 00:19:34.977 lat (msec) : 10=0.25%, 20=99.75% 00:19:34.977 cpu : usr=91.71%, sys=7.69%, ctx=16, majf=0, minf=9 00:19:34.977 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:34.977 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.977 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.977 issued rwts: total=1215,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.977 latency : target=0, window=0, percentile=100.00%, depth=3 00:19:34.977 00:19:34.977 Run status group 0 (all jobs): 00:19:34.977 READ: bw=90.9MiB/s (95.4MB/s), 30.3MiB/s-30.3MiB/s (31.8MB/s-31.8MB/s), io=456MiB (478MB), run=5007-5010msec 00:19:34.977 08:05:40 -- target/dif.sh@107 -- # destroy_subsystems 0 00:19:34.977 08:05:40 -- target/dif.sh@43 -- # local sub 00:19:34.977 08:05:40 -- target/dif.sh@45 -- # for sub in "$@" 00:19:34.977 08:05:40 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:34.977 08:05:40 -- target/dif.sh@36 -- # local sub_id=0 00:19:34.977 08:05:40 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:34.977 08:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:34.977 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.977 08:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:34.977 08:05:40 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:34.977 08:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:34.977 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.977 08:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:34.977 08:05:40 -- target/dif.sh@109 -- # NULL_DIF=2 00:19:34.977 08:05:40 -- target/dif.sh@109 -- # bs=4k 00:19:34.977 08:05:40 -- target/dif.sh@109 -- # numjobs=8 00:19:34.977 08:05:40 -- target/dif.sh@109 -- # iodepth=16 00:19:34.977 08:05:40 -- target/dif.sh@109 -- # runtime= 00:19:34.977 08:05:40 -- target/dif.sh@109 -- # files=2 00:19:34.977 08:05:40 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:19:34.977 08:05:40 -- target/dif.sh@28 -- # local sub 00:19:34.977 08:05:40 -- target/dif.sh@30 -- # for sub in "$@" 00:19:34.977 08:05:40 -- target/dif.sh@31 -- # create_subsystem 0 00:19:34.977 08:05:40 -- target/dif.sh@18 -- # local sub_id=0 00:19:34.977 08:05:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:19:34.977 08:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:34.978 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.978 bdev_null0 00:19:34.978 08:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:34.978 08:05:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:34.978 08:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:34.978 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.978 08:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:34.978 08:05:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:34.978 08:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:34.978 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.978 08:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:34.978 08:05:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:34.978 08:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:34.978 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.978 [2024-07-13 08:05:40.596898] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:34.978 08:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:34.978 08:05:40 -- target/dif.sh@30 -- # for sub in "$@" 00:19:34.978 08:05:40 -- target/dif.sh@31 -- # create_subsystem 1 00:19:34.978 08:05:40 -- target/dif.sh@18 -- # local sub_id=1 00:19:34.978 08:05:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:19:34.978 08:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:34.978 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.978 bdev_null1 00:19:34.978 08:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:34.978 08:05:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:34.978 08:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:34.978 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.978 08:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:34.978 08:05:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:34.978 08:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:34.978 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.978 08:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:34.978 08:05:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:34.978 08:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:34.978 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.978 08:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:34.978 08:05:40 -- target/dif.sh@30 -- # for sub in "$@" 00:19:34.978 08:05:40 -- target/dif.sh@31 -- # create_subsystem 2 00:19:34.978 08:05:40 -- target/dif.sh@18 -- # local sub_id=2 00:19:34.978 08:05:40 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:19:34.978 08:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:34.978 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.978 bdev_null2 00:19:34.978 08:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:34.978 08:05:40 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:19:34.978 08:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:34.978 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.978 08:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:34.978 08:05:40 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:19:34.978 08:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:34.978 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.978 08:05:40 -- common/autotest_common.sh@579 -- # [[ 0 
== 0 ]] 00:19:34.978 08:05:40 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:34.978 08:05:40 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:34.978 08:05:40 -- common/autotest_common.sh@10 -- # set +x 00:19:34.978 08:05:40 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:34.978 08:05:40 -- target/dif.sh@112 -- # fio /dev/fd/62 00:19:34.978 08:05:40 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:19:34.978 08:05:40 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:19:34.978 08:05:40 -- nvmf/common.sh@520 -- # config=() 00:19:34.978 08:05:40 -- nvmf/common.sh@520 -- # local subsystem config 00:19:34.978 08:05:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:34.978 08:05:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:34.978 { 00:19:34.978 "params": { 00:19:34.978 "name": "Nvme$subsystem", 00:19:34.978 "trtype": "$TEST_TRANSPORT", 00:19:34.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.978 "adrfam": "ipv4", 00:19:34.978 "trsvcid": "$NVMF_PORT", 00:19:34.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.978 "hdgst": ${hdgst:-false}, 00:19:34.978 "ddgst": ${ddgst:-false} 00:19:34.978 }, 00:19:34.978 "method": "bdev_nvme_attach_controller" 00:19:34.978 } 00:19:34.978 EOF 00:19:34.978 )") 00:19:34.978 08:05:40 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:34.978 08:05:40 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:34.978 08:05:40 -- target/dif.sh@82 -- # gen_fio_conf 00:19:34.978 08:05:40 -- target/dif.sh@54 -- # local file 00:19:34.978 08:05:40 -- target/dif.sh@56 -- # cat 00:19:34.978 08:05:40 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:19:34.978 08:05:40 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:34.978 08:05:40 -- common/autotest_common.sh@1318 -- # local sanitizers 00:19:34.978 08:05:40 -- nvmf/common.sh@542 -- # cat 00:19:34.978 08:05:40 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:34.978 08:05:40 -- common/autotest_common.sh@1320 -- # shift 00:19:34.978 08:05:40 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:19:34.978 08:05:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:34.978 08:05:40 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:34.978 08:05:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:34.978 08:05:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:34.978 { 00:19:34.978 "params": { 00:19:34.978 "name": "Nvme$subsystem", 00:19:34.978 "trtype": "$TEST_TRANSPORT", 00:19:34.978 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.978 "adrfam": "ipv4", 00:19:34.978 "trsvcid": "$NVMF_PORT", 00:19:34.978 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.978 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.978 "hdgst": ${hdgst:-false}, 00:19:34.978 "ddgst": ${ddgst:-false} 00:19:34.978 }, 00:19:34.978 "method": "bdev_nvme_attach_controller" 00:19:34.978 } 00:19:34.978 EOF 00:19:34.978 )") 00:19:34.978 08:05:40 -- target/dif.sh@72 -- # (( file <= files )) 00:19:34.978 08:05:40 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:34.978 08:05:40 -- target/dif.sh@73 -- # 
cat 00:19:34.978 08:05:40 -- nvmf/common.sh@542 -- # cat 00:19:34.978 08:05:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:34.978 08:05:40 -- common/autotest_common.sh@1324 -- # grep libasan 00:19:34.978 08:05:40 -- target/dif.sh@72 -- # (( file++ )) 00:19:34.978 08:05:40 -- target/dif.sh@72 -- # (( file <= files )) 00:19:34.978 08:05:40 -- target/dif.sh@73 -- # cat 00:19:34.979 08:05:40 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:34.979 08:05:40 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:34.979 { 00:19:34.979 "params": { 00:19:34.979 "name": "Nvme$subsystem", 00:19:34.979 "trtype": "$TEST_TRANSPORT", 00:19:34.979 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:34.979 "adrfam": "ipv4", 00:19:34.979 "trsvcid": "$NVMF_PORT", 00:19:34.979 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:34.979 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:34.979 "hdgst": ${hdgst:-false}, 00:19:34.979 "ddgst": ${ddgst:-false} 00:19:34.979 }, 00:19:34.979 "method": "bdev_nvme_attach_controller" 00:19:34.979 } 00:19:34.979 EOF 00:19:34.979 )") 00:19:34.979 08:05:40 -- nvmf/common.sh@542 -- # cat 00:19:34.979 08:05:40 -- target/dif.sh@72 -- # (( file++ )) 00:19:34.979 08:05:40 -- target/dif.sh@72 -- # (( file <= files )) 00:19:34.979 08:05:40 -- nvmf/common.sh@544 -- # jq . 00:19:34.979 08:05:40 -- nvmf/common.sh@545 -- # IFS=, 00:19:34.979 08:05:40 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:34.979 "params": { 00:19:34.979 "name": "Nvme0", 00:19:34.979 "trtype": "tcp", 00:19:34.979 "traddr": "10.0.0.2", 00:19:34.979 "adrfam": "ipv4", 00:19:34.979 "trsvcid": "4420", 00:19:34.979 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:34.979 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:34.979 "hdgst": false, 00:19:34.979 "ddgst": false 00:19:34.979 }, 00:19:34.979 "method": "bdev_nvme_attach_controller" 00:19:34.979 },{ 00:19:34.979 "params": { 00:19:34.979 "name": "Nvme1", 00:19:34.979 "trtype": "tcp", 00:19:34.979 "traddr": "10.0.0.2", 00:19:34.979 "adrfam": "ipv4", 00:19:34.979 "trsvcid": "4420", 00:19:34.979 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:34.979 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:34.979 "hdgst": false, 00:19:34.979 "ddgst": false 00:19:34.979 }, 00:19:34.979 "method": "bdev_nvme_attach_controller" 00:19:34.979 },{ 00:19:34.979 "params": { 00:19:34.979 "name": "Nvme2", 00:19:34.979 "trtype": "tcp", 00:19:34.979 "traddr": "10.0.0.2", 00:19:34.979 "adrfam": "ipv4", 00:19:34.979 "trsvcid": "4420", 00:19:34.979 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:34.979 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:34.979 "hdgst": false, 00:19:34.979 "ddgst": false 00:19:34.979 }, 00:19:34.979 "method": "bdev_nvme_attach_controller" 00:19:34.979 }' 00:19:34.979 08:05:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:34.979 08:05:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:34.979 08:05:40 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:34.979 08:05:40 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:19:34.979 08:05:40 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:34.979 08:05:40 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:34.979 08:05:40 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:34.979 08:05:40 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:34.979 08:05:40 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:34.979 08:05:40 -- 
common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:35.268 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:35.268 ... 00:19:35.268 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:35.268 ... 00:19:35.268 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:19:35.268 ... 00:19:35.268 fio-3.35 00:19:35.268 Starting 24 threads 00:19:35.835 [2024-07-13 08:05:41.347306] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:19:35.836 [2024-07-13 08:05:41.347401] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:45.813 00:19:45.813 filename0: (groupid=0, jobs=1): err= 0: pid=81443: Sat Jul 13 08:05:51 2024 00:19:45.813 read: IOPS=222, BW=888KiB/s (910kB/s)(8908KiB/10027msec) 00:19:45.813 slat (usec): min=4, max=5030, avg=17.15, stdev=106.37 00:19:45.813 clat (msec): min=35, max=122, avg=71.86, stdev=19.41 00:19:45.813 lat (msec): min=35, max=122, avg=71.88, stdev=19.41 00:19:45.813 clat percentiles (msec): 00:19:45.813 | 1.00th=[ 38], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:19:45.813 | 30.00th=[ 61], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 72], 00:19:45.813 | 70.00th=[ 81], 80.00th=[ 94], 90.00th=[ 104], 95.00th=[ 108], 00:19:45.813 | 99.00th=[ 113], 99.50th=[ 118], 99.90th=[ 121], 99.95th=[ 123], 00:19:45.813 | 99.99th=[ 123] 00:19:45.813 bw ( KiB/s): min= 640, max= 1040, per=4.14%, avg=886.40, stdev=135.23, samples=20 00:19:45.813 iops : min= 160, max= 260, avg=221.55, stdev=33.79, samples=20 00:19:45.813 lat (msec) : 50=20.34%, 100=68.79%, 250=10.87% 00:19:45.813 cpu : usr=32.99%, sys=2.01%, ctx=965, majf=0, minf=9 00:19:45.813 IO depths : 1=0.1%, 2=0.9%, 4=3.7%, 8=79.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:45.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.813 complete : 0=0.0%, 4=88.2%, 8=11.0%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.813 issued rwts: total=2227,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.813 filename0: (groupid=0, jobs=1): err= 0: pid=81444: Sat Jul 13 08:05:51 2024 00:19:45.813 read: IOPS=223, BW=895KiB/s (917kB/s)(8972KiB/10020msec) 00:19:45.813 slat (usec): min=3, max=8033, avg=31.71, stdev=348.56 00:19:45.813 clat (msec): min=34, max=129, avg=71.30, stdev=19.58 00:19:45.813 lat (msec): min=34, max=129, avg=71.33, stdev=19.58 00:19:45.813 clat percentiles (msec): 00:19:45.813 | 1.00th=[ 37], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 51], 00:19:45.813 | 30.00th=[ 61], 40.00th=[ 64], 50.00th=[ 72], 60.00th=[ 72], 00:19:45.813 | 70.00th=[ 75], 80.00th=[ 87], 90.00th=[ 105], 95.00th=[ 108], 00:19:45.813 | 99.00th=[ 120], 99.50th=[ 130], 99.90th=[ 130], 99.95th=[ 130], 00:19:45.813 | 99.99th=[ 130] 00:19:45.813 bw ( KiB/s): min= 640, max= 1048, per=4.17%, avg=892.95, stdev=127.96, samples=20 00:19:45.813 iops : min= 160, max= 262, avg=223.20, stdev=31.96, samples=20 00:19:45.813 lat (msec) : 50=19.75%, 100=69.91%, 250=10.34% 00:19:45.813 cpu : usr=32.22%, sys=1.76%, ctx=865, majf=0, minf=9 00:19:45.813 IO depths : 1=0.1%, 2=0.8%, 4=3.0%, 8=80.2%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:45.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.813 complete : 
0=0.0%, 4=88.0%, 8=11.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.813 issued rwts: total=2243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.813 filename0: (groupid=0, jobs=1): err= 0: pid=81445: Sat Jul 13 08:05:51 2024 00:19:45.813 read: IOPS=212, BW=850KiB/s (871kB/s)(8520KiB/10022msec) 00:19:45.813 slat (usec): min=4, max=8040, avg=33.17, stdev=387.92 00:19:45.813 clat (msec): min=34, max=146, avg=75.09, stdev=22.36 00:19:45.813 lat (msec): min=34, max=146, avg=75.12, stdev=22.36 00:19:45.813 clat percentiles (msec): 00:19:45.813 | 1.00th=[ 38], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:19:45.813 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:19:45.813 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:19:45.813 | 99.00th=[ 132], 99.50th=[ 146], 99.90th=[ 146], 99.95th=[ 146], 00:19:45.813 | 99.99th=[ 146] 00:19:45.813 bw ( KiB/s): min= 528, max= 1016, per=3.96%, avg=847.60, stdev=161.50, samples=20 00:19:45.813 iops : min= 132, max= 254, avg=211.90, stdev=40.37, samples=20 00:19:45.813 lat (msec) : 50=18.22%, 100=65.07%, 250=16.71% 00:19:45.813 cpu : usr=33.42%, sys=1.64%, ctx=976, majf=0, minf=9 00:19:45.813 IO depths : 1=0.1%, 2=1.6%, 4=6.2%, 8=76.4%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:45.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.813 complete : 0=0.0%, 4=89.2%, 8=9.4%, 16=1.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.813 issued rwts: total=2130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.813 filename0: (groupid=0, jobs=1): err= 0: pid=81446: Sat Jul 13 08:05:51 2024 00:19:45.813 read: IOPS=222, BW=891KiB/s (912kB/s)(8928KiB/10025msec) 00:19:45.813 slat (usec): min=4, max=8023, avg=20.02, stdev=189.59 00:19:45.813 clat (msec): min=22, max=143, avg=71.74, stdev=19.49 00:19:45.813 lat (msec): min=22, max=143, avg=71.76, stdev=19.49 00:19:45.813 clat percentiles (msec): 00:19:45.813 | 1.00th=[ 36], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 56], 00:19:45.813 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:19:45.813 | 70.00th=[ 78], 80.00th=[ 88], 90.00th=[ 104], 95.00th=[ 109], 00:19:45.813 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 133], 00:19:45.813 | 99.99th=[ 144] 00:19:45.813 bw ( KiB/s): min= 712, max= 1048, per=4.14%, avg=886.20, stdev=109.48, samples=20 00:19:45.813 iops : min= 178, max= 262, avg=221.50, stdev=27.31, samples=20 00:19:45.813 lat (msec) : 50=18.68%, 100=70.97%, 250=10.35% 00:19:45.813 cpu : usr=33.39%, sys=1.97%, ctx=916, majf=0, minf=9 00:19:45.813 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=82.3%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:45.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.813 complete : 0=0.0%, 4=87.7%, 8=12.1%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.813 issued rwts: total=2232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.813 filename0: (groupid=0, jobs=1): err= 0: pid=81447: Sat Jul 13 08:05:51 2024 00:19:45.813 read: IOPS=226, BW=907KiB/s (929kB/s)(9080KiB/10013msec) 00:19:45.813 slat (nsec): min=4531, max=39077, avg=14187.45, stdev=4752.25 00:19:45.813 clat (msec): min=18, max=128, avg=70.49, stdev=19.71 00:19:45.813 lat (msec): min=18, max=128, avg=70.51, stdev=19.71 00:19:45.813 clat percentiles (msec): 00:19:45.813 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 50], 
00:19:45.813 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:19:45.813 | 70.00th=[ 78], 80.00th=[ 85], 90.00th=[ 105], 95.00th=[ 108], 00:19:45.813 | 99.00th=[ 111], 99.50th=[ 121], 99.90th=[ 127], 99.95th=[ 129], 00:19:45.813 | 99.99th=[ 129] 00:19:45.813 bw ( KiB/s): min= 688, max= 1040, per=4.22%, avg=903.60, stdev=111.77, samples=20 00:19:45.813 iops : min= 172, max= 260, avg=225.90, stdev=27.94, samples=20 00:19:45.813 lat (msec) : 20=0.31%, 50=21.19%, 100=68.37%, 250=10.13% 00:19:45.813 cpu : usr=31.52%, sys=1.66%, ctx=876, majf=0, minf=9 00:19:45.813 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.0%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:45.813 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.813 complete : 0=0.0%, 4=87.3%, 8=12.6%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.813 issued rwts: total=2270,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.813 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.814 filename0: (groupid=0, jobs=1): err= 0: pid=81448: Sat Jul 13 08:05:51 2024 00:19:45.814 read: IOPS=221, BW=885KiB/s (906kB/s)(8872KiB/10030msec) 00:19:45.814 slat (usec): min=7, max=8025, avg=17.76, stdev=170.16 00:19:45.814 clat (msec): min=10, max=131, avg=72.21, stdev=19.12 00:19:45.814 lat (msec): min=10, max=131, avg=72.23, stdev=19.12 00:19:45.814 clat percentiles (msec): 00:19:45.814 | 1.00th=[ 27], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 59], 00:19:45.814 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 72], 00:19:45.814 | 70.00th=[ 77], 80.00th=[ 90], 90.00th=[ 101], 95.00th=[ 108], 00:19:45.814 | 99.00th=[ 110], 99.50th=[ 113], 99.90th=[ 118], 99.95th=[ 132], 00:19:45.814 | 99.99th=[ 132] 00:19:45.814 bw ( KiB/s): min= 688, max= 1230, per=4.12%, avg=882.40, stdev=121.74, samples=20 00:19:45.814 iops : min= 172, max= 307, avg=220.55, stdev=30.35, samples=20 00:19:45.814 lat (msec) : 20=0.86%, 50=14.34%, 100=74.75%, 250=10.05% 00:19:45.814 cpu : usr=33.60%, sys=1.84%, ctx=908, majf=0, minf=9 00:19:45.814 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.4%, 16=17.0%, 32=0.0%, >=64=0.0% 00:19:45.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.814 complete : 0=0.0%, 4=87.8%, 8=12.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.814 issued rwts: total=2218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.814 filename0: (groupid=0, jobs=1): err= 0: pid=81449: Sat Jul 13 08:05:51 2024 00:19:45.814 read: IOPS=230, BW=920KiB/s (942kB/s)(9220KiB/10021msec) 00:19:45.814 slat (usec): min=4, max=5033, avg=23.25, stdev=189.28 00:19:45.814 clat (msec): min=19, max=142, avg=69.44, stdev=19.73 00:19:45.814 lat (msec): min=19, max=142, avg=69.47, stdev=19.73 00:19:45.814 clat percentiles (msec): 00:19:45.814 | 1.00th=[ 38], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 50], 00:19:45.814 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 69], 60.00th=[ 72], 00:19:45.814 | 70.00th=[ 77], 80.00th=[ 86], 90.00th=[ 101], 95.00th=[ 107], 00:19:45.814 | 99.00th=[ 114], 99.50th=[ 116], 99.90th=[ 127], 99.95th=[ 128], 00:19:45.814 | 99.99th=[ 144] 00:19:45.814 bw ( KiB/s): min= 712, max= 1072, per=4.28%, avg=915.40, stdev=123.48, samples=20 00:19:45.814 iops : min= 178, max= 268, avg=228.85, stdev=30.87, samples=20 00:19:45.814 lat (msec) : 20=0.04%, 50=21.52%, 100=68.68%, 250=9.76% 00:19:45.814 cpu : usr=41.29%, sys=2.15%, ctx=1565, majf=0, minf=9 00:19:45.814 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:19:45.814 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.814 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.814 issued rwts: total=2305,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.814 filename0: (groupid=0, jobs=1): err= 0: pid=81450: Sat Jul 13 08:05:51 2024 00:19:45.814 read: IOPS=219, BW=877KiB/s (898kB/s)(8796KiB/10029msec) 00:19:45.814 slat (usec): min=6, max=4021, avg=17.47, stdev=120.89 00:19:45.814 clat (msec): min=23, max=148, avg=72.87, stdev=19.04 00:19:45.814 lat (msec): min=23, max=148, avg=72.89, stdev=19.03 00:19:45.814 clat percentiles (msec): 00:19:45.814 | 1.00th=[ 40], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 56], 00:19:45.814 | 30.00th=[ 64], 40.00th=[ 68], 50.00th=[ 72], 60.00th=[ 73], 00:19:45.814 | 70.00th=[ 80], 80.00th=[ 91], 90.00th=[ 104], 95.00th=[ 108], 00:19:45.814 | 99.00th=[ 113], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:19:45.814 | 99.99th=[ 148] 00:19:45.814 bw ( KiB/s): min= 664, max= 1048, per=4.08%, avg=874.10, stdev=120.90, samples=20 00:19:45.814 iops : min= 166, max= 262, avg=218.50, stdev=30.21, samples=20 00:19:45.814 lat (msec) : 50=16.10%, 100=73.53%, 250=10.37% 00:19:45.814 cpu : usr=40.18%, sys=2.06%, ctx=1297, majf=0, minf=9 00:19:45.814 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.3%, 16=16.8%, 32=0.0%, >=64=0.0% 00:19:45.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.814 complete : 0=0.0%, 4=87.8%, 8=12.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.814 issued rwts: total=2199,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.814 filename1: (groupid=0, jobs=1): err= 0: pid=81451: Sat Jul 13 08:05:51 2024 00:19:45.814 read: IOPS=224, BW=897KiB/s (919kB/s)(9000KiB/10032msec) 00:19:45.814 slat (usec): min=3, max=4024, avg=20.89, stdev=158.54 00:19:45.814 clat (msec): min=10, max=144, avg=71.21, stdev=20.69 00:19:45.814 lat (msec): min=10, max=148, avg=71.23, stdev=20.70 00:19:45.814 clat percentiles (msec): 00:19:45.814 | 1.00th=[ 39], 5.00th=[ 44], 10.00th=[ 46], 20.00th=[ 52], 00:19:45.814 | 30.00th=[ 62], 40.00th=[ 65], 50.00th=[ 70], 60.00th=[ 73], 00:19:45.814 | 70.00th=[ 78], 80.00th=[ 88], 90.00th=[ 104], 95.00th=[ 107], 00:19:45.814 | 99.00th=[ 124], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:19:45.814 | 99.99th=[ 144] 00:19:45.814 bw ( KiB/s): min= 528, max= 1126, per=4.18%, avg=894.00, stdev=146.27, samples=20 00:19:45.814 iops : min= 132, max= 281, avg=223.45, stdev=36.51, samples=20 00:19:45.814 lat (msec) : 20=0.71%, 50=17.33%, 100=70.13%, 250=11.82% 00:19:45.814 cpu : usr=41.91%, sys=2.27%, ctx=1457, majf=0, minf=9 00:19:45.814 IO depths : 1=0.1%, 2=0.7%, 4=2.8%, 8=80.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:45.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.814 complete : 0=0.0%, 4=88.1%, 8=11.3%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.814 issued rwts: total=2250,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.814 filename1: (groupid=0, jobs=1): err= 0: pid=81452: Sat Jul 13 08:05:51 2024 00:19:45.814 read: IOPS=226, BW=906KiB/s (928kB/s)(9060KiB/10002msec) 00:19:45.814 slat (usec): min=4, max=8026, avg=18.63, stdev=168.42 00:19:45.814 clat (msec): min=2, max=143, avg=70.56, stdev=23.22 00:19:45.814 lat (msec): min=2, max=143, avg=70.58, stdev=23.22 00:19:45.814 clat percentiles 
(msec): 00:19:45.814 | 1.00th=[ 33], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 48], 00:19:45.814 | 30.00th=[ 56], 40.00th=[ 63], 50.00th=[ 69], 60.00th=[ 72], 00:19:45.814 | 70.00th=[ 77], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 112], 00:19:45.814 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 136], 99.95th=[ 144], 00:19:45.814 | 99.99th=[ 144] 00:19:45.814 bw ( KiB/s): min= 496, max= 1072, per=4.16%, avg=890.16, stdev=171.35, samples=19 00:19:45.814 iops : min= 124, max= 268, avg=222.53, stdev=42.84, samples=19 00:19:45.814 lat (msec) : 4=0.31%, 20=0.40%, 50=23.58%, 100=62.78%, 250=12.94% 00:19:45.814 cpu : usr=38.65%, sys=2.11%, ctx=1197, majf=0, minf=9 00:19:45.814 IO depths : 1=0.1%, 2=1.0%, 4=4.0%, 8=79.6%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:45.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.814 complete : 0=0.0%, 4=87.9%, 8=11.2%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.814 issued rwts: total=2265,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.814 filename1: (groupid=0, jobs=1): err= 0: pid=81453: Sat Jul 13 08:05:51 2024 00:19:45.814 read: IOPS=228, BW=915KiB/s (937kB/s)(9200KiB/10052msec) 00:19:45.814 slat (usec): min=7, max=4084, avg=18.78, stdev=145.72 00:19:45.814 clat (usec): min=1708, max=127897, avg=69775.66, stdev=23362.61 00:19:45.814 lat (usec): min=1725, max=127906, avg=69794.44, stdev=23358.06 00:19:45.814 clat percentiles (msec): 00:19:45.814 | 1.00th=[ 3], 5.00th=[ 36], 10.00th=[ 45], 20.00th=[ 52], 00:19:45.814 | 30.00th=[ 63], 40.00th=[ 67], 50.00th=[ 70], 60.00th=[ 73], 00:19:45.814 | 70.00th=[ 79], 80.00th=[ 90], 90.00th=[ 102], 95.00th=[ 107], 00:19:45.814 | 99.00th=[ 114], 99.50th=[ 117], 99.90th=[ 127], 99.95th=[ 127], 00:19:45.814 | 99.99th=[ 128] 00:19:45.814 bw ( KiB/s): min= 688, max= 1777, per=4.26%, avg=912.45, stdev=227.33, samples=20 00:19:45.814 iops : min= 172, max= 444, avg=228.10, stdev=56.78, samples=20 00:19:45.814 lat (msec) : 2=0.65%, 4=2.74%, 10=0.78%, 20=0.61%, 50=13.13% 00:19:45.814 lat (msec) : 100=70.61%, 250=11.48% 00:19:45.814 cpu : usr=42.62%, sys=1.99%, ctx=1718, majf=0, minf=9 00:19:45.814 IO depths : 1=0.3%, 2=0.9%, 4=2.7%, 8=79.8%, 16=16.3%, 32=0.0%, >=64=0.0% 00:19:45.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.814 complete : 0=0.0%, 4=88.4%, 8=11.0%, 16=0.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.814 issued rwts: total=2300,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.814 filename1: (groupid=0, jobs=1): err= 0: pid=81454: Sat Jul 13 08:05:51 2024 00:19:45.814 read: IOPS=212, BW=850KiB/s (870kB/s)(8520KiB/10026msec) 00:19:45.814 slat (usec): min=4, max=8039, avg=34.10, stdev=387.98 00:19:45.814 clat (msec): min=34, max=143, avg=75.09, stdev=20.87 00:19:45.814 lat (msec): min=34, max=143, avg=75.12, stdev=20.86 00:19:45.814 clat percentiles (msec): 00:19:45.814 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 48], 20.00th=[ 60], 00:19:45.814 | 30.00th=[ 62], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 74], 00:19:45.814 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 107], 95.00th=[ 108], 00:19:45.814 | 99.00th=[ 132], 99.50th=[ 144], 99.90th=[ 144], 99.95th=[ 144], 00:19:45.814 | 99.99th=[ 144] 00:19:45.814 bw ( KiB/s): min= 512, max= 1008, per=3.96%, avg=848.10, stdev=158.90, samples=20 00:19:45.814 iops : min= 128, max= 252, avg=212.00, stdev=39.71, samples=20 00:19:45.814 lat (msec) : 50=15.45%, 100=68.87%, 250=15.68% 00:19:45.814 cpu 
: usr=33.81%, sys=1.96%, ctx=907, majf=0, minf=9 00:19:45.814 IO depths : 1=0.1%, 2=1.8%, 4=7.1%, 8=75.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:19:45.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.814 complete : 0=0.0%, 4=89.4%, 8=9.0%, 16=1.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.814 issued rwts: total=2130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.814 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.814 filename1: (groupid=0, jobs=1): err= 0: pid=81455: Sat Jul 13 08:05:51 2024 00:19:45.814 read: IOPS=225, BW=904KiB/s (925kB/s)(9064KiB/10030msec) 00:19:45.814 slat (usec): min=7, max=9027, avg=21.73, stdev=221.18 00:19:45.814 clat (msec): min=10, max=131, avg=70.69, stdev=20.16 00:19:45.814 lat (msec): min=10, max=131, avg=70.71, stdev=20.16 00:19:45.814 clat percentiles (msec): 00:19:45.814 | 1.00th=[ 34], 5.00th=[ 44], 10.00th=[ 47], 20.00th=[ 50], 00:19:45.814 | 30.00th=[ 61], 40.00th=[ 65], 50.00th=[ 71], 60.00th=[ 72], 00:19:45.814 | 70.00th=[ 78], 80.00th=[ 88], 90.00th=[ 101], 95.00th=[ 108], 00:19:45.814 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 128], 99.95th=[ 132], 00:19:45.814 | 99.99th=[ 132] 00:19:45.814 bw ( KiB/s): min= 688, max= 1048, per=4.20%, avg=899.25, stdev=117.19, samples=20 00:19:45.814 iops : min= 172, max= 262, avg=224.75, stdev=29.24, samples=20 00:19:45.814 lat (msec) : 20=0.71%, 50=19.73%, 100=69.55%, 250=10.02% 00:19:45.814 cpu : usr=40.80%, sys=2.37%, ctx=1206, majf=0, minf=9 00:19:45.814 IO depths : 1=0.1%, 2=0.4%, 4=1.7%, 8=81.6%, 16=16.2%, 32=0.0%, >=64=0.0% 00:19:45.814 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 complete : 0=0.0%, 4=87.7%, 8=12.0%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 issued rwts: total=2266,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.815 filename1: (groupid=0, jobs=1): err= 0: pid=81456: Sat Jul 13 08:05:51 2024 00:19:45.815 read: IOPS=212, BW=852KiB/s (872kB/s)(8536KiB/10024msec) 00:19:45.815 slat (usec): min=6, max=4025, avg=16.87, stdev=122.86 00:19:45.815 clat (msec): min=23, max=141, avg=74.96, stdev=18.49 00:19:45.815 lat (msec): min=24, max=141, avg=74.98, stdev=18.49 00:19:45.815 clat percentiles (msec): 00:19:45.815 | 1.00th=[ 41], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 61], 00:19:45.815 | 30.00th=[ 65], 40.00th=[ 70], 50.00th=[ 72], 60.00th=[ 75], 00:19:45.815 | 70.00th=[ 84], 80.00th=[ 94], 90.00th=[ 103], 95.00th=[ 108], 00:19:45.815 | 99.00th=[ 122], 99.50th=[ 124], 99.90th=[ 142], 99.95th=[ 142], 00:19:45.815 | 99.99th=[ 142] 00:19:45.815 bw ( KiB/s): min= 664, max= 1048, per=3.97%, avg=849.50, stdev=110.29, samples=20 00:19:45.815 iops : min= 166, max= 262, avg=212.35, stdev=27.55, samples=20 00:19:45.815 lat (msec) : 50=10.03%, 100=79.10%, 250=10.87% 00:19:45.815 cpu : usr=39.85%, sys=2.36%, ctx=1086, majf=0, minf=9 00:19:45.815 IO depths : 1=0.1%, 2=0.9%, 4=3.8%, 8=78.9%, 16=16.4%, 32=0.0%, >=64=0.0% 00:19:45.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 complete : 0=0.0%, 4=88.7%, 8=10.5%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 issued rwts: total=2134,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.815 filename1: (groupid=0, jobs=1): err= 0: pid=81457: Sat Jul 13 08:05:51 2024 00:19:45.815 read: IOPS=232, BW=931KiB/s (954kB/s)(9324KiB/10010msec) 00:19:45.815 slat (usec): min=4, max=8035, avg=25.29, stdev=234.88 
00:19:45.815 clat (msec): min=10, max=143, avg=68.59, stdev=20.70 00:19:45.815 lat (msec): min=10, max=143, avg=68.61, stdev=20.69 00:19:45.815 clat percentiles (msec): 00:19:45.815 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 46], 20.00th=[ 48], 00:19:45.815 | 30.00th=[ 57], 40.00th=[ 63], 50.00th=[ 67], 60.00th=[ 72], 00:19:45.815 | 70.00th=[ 74], 80.00th=[ 84], 90.00th=[ 102], 95.00th=[ 110], 00:19:45.815 | 99.00th=[ 121], 99.50th=[ 125], 99.90th=[ 142], 99.95th=[ 144], 00:19:45.815 | 99.99th=[ 144] 00:19:45.815 bw ( KiB/s): min= 632, max= 1075, per=4.32%, avg=925.80, stdev=130.52, samples=20 00:19:45.815 iops : min= 158, max= 268, avg=231.40, stdev=32.60, samples=20 00:19:45.815 lat (msec) : 20=0.26%, 50=23.68%, 100=65.68%, 250=10.38% 00:19:45.815 cpu : usr=42.39%, sys=2.27%, ctx=1187, majf=0, minf=9 00:19:45.815 IO depths : 1=0.1%, 2=0.5%, 4=1.8%, 8=82.2%, 16=15.5%, 32=0.0%, >=64=0.0% 00:19:45.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 complete : 0=0.0%, 4=87.2%, 8=12.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 issued rwts: total=2331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.815 filename1: (groupid=0, jobs=1): err= 0: pid=81458: Sat Jul 13 08:05:51 2024 00:19:45.815 read: IOPS=220, BW=882KiB/s (903kB/s)(8828KiB/10013msec) 00:19:45.815 slat (usec): min=5, max=10174, avg=26.91, stdev=323.82 00:19:45.815 clat (msec): min=15, max=135, avg=72.48, stdev=20.87 00:19:45.815 lat (msec): min=15, max=135, avg=72.50, stdev=20.87 00:19:45.815 clat percentiles (msec): 00:19:45.815 | 1.00th=[ 36], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 52], 00:19:45.815 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 72], 00:19:45.815 | 70.00th=[ 81], 80.00th=[ 95], 90.00th=[ 105], 95.00th=[ 110], 00:19:45.815 | 99.00th=[ 123], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 136], 00:19:45.815 | 99.99th=[ 136] 00:19:45.815 bw ( KiB/s): min= 624, max= 1024, per=4.09%, avg=876.00, stdev=139.67, samples=20 00:19:45.815 iops : min= 156, max= 256, avg=219.00, stdev=34.92, samples=20 00:19:45.815 lat (msec) : 20=0.32%, 50=18.80%, 100=68.42%, 250=12.46% 00:19:45.815 cpu : usr=34.12%, sys=2.08%, ctx=971, majf=0, minf=9 00:19:45.815 IO depths : 1=0.1%, 2=1.1%, 4=4.3%, 8=78.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:19:45.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 issued rwts: total=2207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.815 filename2: (groupid=0, jobs=1): err= 0: pid=81459: Sat Jul 13 08:05:51 2024 00:19:45.815 read: IOPS=223, BW=895KiB/s (916kB/s)(8972KiB/10030msec) 00:19:45.815 slat (usec): min=5, max=8027, avg=20.05, stdev=183.44 00:19:45.815 clat (msec): min=13, max=133, avg=71.43, stdev=19.39 00:19:45.815 lat (msec): min=13, max=133, avg=71.45, stdev=19.38 00:19:45.815 clat percentiles (msec): 00:19:45.815 | 1.00th=[ 26], 5.00th=[ 45], 10.00th=[ 47], 20.00th=[ 55], 00:19:45.815 | 30.00th=[ 62], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 73], 00:19:45.815 | 70.00th=[ 79], 80.00th=[ 88], 90.00th=[ 103], 95.00th=[ 108], 00:19:45.815 | 99.00th=[ 113], 99.50th=[ 117], 99.90th=[ 128], 99.95th=[ 131], 00:19:45.815 | 99.99th=[ 133] 00:19:45.815 bw ( KiB/s): min= 688, max= 1126, per=4.16%, avg=891.20, stdev=117.01, samples=20 00:19:45.815 iops : min= 172, max= 281, avg=222.75, stdev=29.18, 
samples=20 00:19:45.815 lat (msec) : 20=0.71%, 50=16.23%, 100=72.31%, 250=10.74% 00:19:45.815 cpu : usr=41.71%, sys=2.66%, ctx=1352, majf=0, minf=9 00:19:45.815 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=82.4%, 16=16.6%, 32=0.0%, >=64=0.0% 00:19:45.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 complete : 0=0.0%, 4=87.7%, 8=12.2%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 issued rwts: total=2243,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.815 filename2: (groupid=0, jobs=1): err= 0: pid=81460: Sat Jul 13 08:05:51 2024 00:19:45.815 read: IOPS=224, BW=897KiB/s (919kB/s)(9012KiB/10047msec) 00:19:45.815 slat (usec): min=4, max=8029, avg=23.93, stdev=292.31 00:19:45.815 clat (msec): min=3, max=131, avg=71.19, stdev=22.26 00:19:45.815 lat (msec): min=3, max=131, avg=71.21, stdev=22.27 00:19:45.815 clat percentiles (msec): 00:19:45.815 | 1.00th=[ 4], 5.00th=[ 39], 10.00th=[ 48], 20.00th=[ 59], 00:19:45.815 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 72], 60.00th=[ 72], 00:19:45.815 | 70.00th=[ 79], 80.00th=[ 93], 90.00th=[ 105], 95.00th=[ 108], 00:19:45.815 | 99.00th=[ 116], 99.50th=[ 118], 99.90th=[ 132], 99.95th=[ 132], 00:19:45.815 | 99.99th=[ 132] 00:19:45.815 bw ( KiB/s): min= 656, max= 1603, per=4.18%, avg=894.55, stdev=189.95, samples=20 00:19:45.815 iops : min= 164, max= 400, avg=223.60, stdev=47.34, samples=20 00:19:45.815 lat (msec) : 4=2.04%, 10=0.80%, 20=0.62%, 50=12.56%, 100=73.41% 00:19:45.815 lat (msec) : 250=10.56% 00:19:45.815 cpu : usr=31.68%, sys=1.64%, ctx=878, majf=0, minf=9 00:19:45.815 IO depths : 1=0.2%, 2=0.6%, 4=1.7%, 8=80.8%, 16=16.7%, 32=0.0%, >=64=0.0% 00:19:45.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 complete : 0=0.0%, 4=88.3%, 8=11.4%, 16=0.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 issued rwts: total=2253,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.815 filename2: (groupid=0, jobs=1): err= 0: pid=81461: Sat Jul 13 08:05:51 2024 00:19:45.815 read: IOPS=224, BW=898KiB/s (920kB/s)(8992KiB/10010msec) 00:19:45.815 slat (usec): min=4, max=8029, avg=21.94, stdev=239.01 00:19:45.815 clat (msec): min=11, max=142, avg=71.16, stdev=21.25 00:19:45.815 lat (msec): min=11, max=142, avg=71.18, stdev=21.25 00:19:45.815 clat percentiles (msec): 00:19:45.815 | 1.00th=[ 37], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 48], 00:19:45.815 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 71], 60.00th=[ 72], 00:19:45.815 | 70.00th=[ 78], 80.00th=[ 89], 90.00th=[ 105], 95.00th=[ 110], 00:19:45.815 | 99.00th=[ 130], 99.50th=[ 138], 99.90th=[ 138], 99.95th=[ 144], 00:19:45.815 | 99.99th=[ 144] 00:19:45.815 bw ( KiB/s): min= 528, max= 1048, per=4.17%, avg=892.60, stdev=143.96, samples=20 00:19:45.815 iops : min= 132, max= 262, avg=223.10, stdev=35.95, samples=20 00:19:45.815 lat (msec) : 20=0.31%, 50=21.26%, 100=65.21%, 250=13.21% 00:19:45.815 cpu : usr=34.05%, sys=1.83%, ctx=1010, majf=0, minf=9 00:19:45.815 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=80.1%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:45.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 complete : 0=0.0%, 4=88.0%, 8=11.3%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 issued rwts: total=2248,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.815 filename2: (groupid=0, jobs=1): err= 0: pid=81462: Sat Jul 13 
08:05:51 2024 00:19:45.815 read: IOPS=232, BW=929KiB/s (951kB/s)(9300KiB/10010msec) 00:19:45.815 slat (usec): min=3, max=9025, avg=45.21, stdev=423.31 00:19:45.815 clat (msec): min=10, max=128, avg=68.72, stdev=19.69 00:19:45.815 lat (msec): min=10, max=128, avg=68.77, stdev=19.70 00:19:45.815 clat percentiles (msec): 00:19:45.815 | 1.00th=[ 35], 5.00th=[ 43], 10.00th=[ 46], 20.00th=[ 49], 00:19:45.815 | 30.00th=[ 58], 40.00th=[ 64], 50.00th=[ 68], 60.00th=[ 72], 00:19:45.815 | 70.00th=[ 75], 80.00th=[ 84], 90.00th=[ 101], 95.00th=[ 106], 00:19:45.815 | 99.00th=[ 113], 99.50th=[ 117], 99.90th=[ 121], 99.95th=[ 129], 00:19:45.815 | 99.99th=[ 129] 00:19:45.815 bw ( KiB/s): min= 696, max= 1043, per=4.31%, avg=923.35, stdev=111.18, samples=20 00:19:45.815 iops : min= 174, max= 260, avg=230.80, stdev=27.75, samples=20 00:19:45.815 lat (msec) : 20=0.26%, 50=22.71%, 100=67.05%, 250=9.98% 00:19:45.815 cpu : usr=42.10%, sys=2.29%, ctx=1186, majf=0, minf=9 00:19:45.815 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.4%, 16=16.0%, 32=0.0%, >=64=0.0% 00:19:45.815 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 complete : 0=0.0%, 4=87.0%, 8=12.9%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.815 issued rwts: total=2325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.815 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.815 filename2: (groupid=0, jobs=1): err= 0: pid=81463: Sat Jul 13 08:05:51 2024 00:19:45.815 read: IOPS=225, BW=900KiB/s (922kB/s)(9008KiB/10006msec) 00:19:45.815 slat (usec): min=4, max=4027, avg=21.60, stdev=168.93 00:19:45.815 clat (msec): min=14, max=139, avg=71.01, stdev=20.57 00:19:45.815 lat (msec): min=14, max=139, avg=71.03, stdev=20.57 00:19:45.815 clat percentiles (msec): 00:19:45.815 | 1.00th=[ 27], 5.00th=[ 42], 10.00th=[ 45], 20.00th=[ 51], 00:19:45.815 | 30.00th=[ 61], 40.00th=[ 66], 50.00th=[ 71], 60.00th=[ 74], 00:19:45.815 | 70.00th=[ 80], 80.00th=[ 90], 90.00th=[ 104], 95.00th=[ 108], 00:19:45.815 | 99.00th=[ 112], 99.50th=[ 121], 99.90th=[ 132], 99.95th=[ 132], 00:19:45.815 | 99.99th=[ 140] 00:19:45.815 bw ( KiB/s): min= 664, max= 1104, per=4.17%, avg=892.63, stdev=133.95, samples=19 00:19:45.815 iops : min= 166, max= 276, avg=223.16, stdev=33.49, samples=19 00:19:45.815 lat (msec) : 20=0.27%, 50=19.94%, 100=68.74%, 250=11.06% 00:19:45.815 cpu : usr=39.58%, sys=2.22%, ctx=1265, majf=0, minf=9 00:19:45.816 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=82.9%, 16=16.5%, 32=0.0%, >=64=0.0% 00:19:45.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.816 complete : 0=0.0%, 4=87.5%, 8=12.4%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.816 issued rwts: total=2252,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.816 filename2: (groupid=0, jobs=1): err= 0: pid=81464: Sat Jul 13 08:05:51 2024 00:19:45.816 read: IOPS=219, BW=878KiB/s (899kB/s)(8784KiB/10006msec) 00:19:45.816 slat (usec): min=3, max=8031, avg=34.89, stdev=330.80 00:19:45.816 clat (msec): min=6, max=144, avg=72.74, stdev=22.63 00:19:45.816 lat (msec): min=6, max=144, avg=72.78, stdev=22.64 00:19:45.816 clat percentiles (msec): 00:19:45.816 | 1.00th=[ 35], 5.00th=[ 42], 10.00th=[ 47], 20.00th=[ 49], 00:19:45.816 | 30.00th=[ 61], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 73], 00:19:45.816 | 70.00th=[ 82], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 109], 00:19:45.816 | 99.00th=[ 136], 99.50th=[ 136], 99.90th=[ 144], 99.95th=[ 144], 00:19:45.816 | 99.99th=[ 144] 00:19:45.816 bw ( 
KiB/s): min= 528, max= 1072, per=4.03%, avg=863.63, stdev=171.47, samples=19 00:19:45.816 iops : min= 132, max= 268, avg=215.89, stdev=42.87, samples=19 00:19:45.816 lat (msec) : 10=0.27%, 50=21.49%, 100=61.66%, 250=16.58% 00:19:45.816 cpu : usr=44.79%, sys=2.41%, ctx=1143, majf=0, minf=9 00:19:45.816 IO depths : 1=0.1%, 2=2.2%, 4=8.9%, 8=74.3%, 16=14.6%, 32=0.0%, >=64=0.0% 00:19:45.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.816 complete : 0=0.0%, 4=89.3%, 8=8.7%, 16=1.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.816 issued rwts: total=2196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.816 filename2: (groupid=0, jobs=1): err= 0: pid=81465: Sat Jul 13 08:05:51 2024 00:19:45.816 read: IOPS=229, BW=917KiB/s (939kB/s)(9180KiB/10011msec) 00:19:45.816 slat (usec): min=3, max=8031, avg=24.66, stdev=289.71 00:19:45.816 clat (msec): min=20, max=141, avg=69.64, stdev=20.14 00:19:45.816 lat (msec): min=20, max=141, avg=69.67, stdev=20.15 00:19:45.816 clat percentiles (msec): 00:19:45.816 | 1.00th=[ 36], 5.00th=[ 41], 10.00th=[ 47], 20.00th=[ 48], 00:19:45.816 | 30.00th=[ 60], 40.00th=[ 63], 50.00th=[ 70], 60.00th=[ 72], 00:19:45.816 | 70.00th=[ 74], 80.00th=[ 85], 90.00th=[ 100], 95.00th=[ 108], 00:19:45.816 | 99.00th=[ 117], 99.50th=[ 129], 99.90th=[ 129], 99.95th=[ 142], 00:19:45.816 | 99.99th=[ 142] 00:19:45.816 bw ( KiB/s): min= 641, max= 1072, per=4.27%, avg=913.60, stdev=130.57, samples=20 00:19:45.816 iops : min= 160, max= 268, avg=228.35, stdev=32.64, samples=20 00:19:45.816 lat (msec) : 50=22.92%, 100=67.23%, 250=9.85% 00:19:45.816 cpu : usr=34.13%, sys=1.78%, ctx=947, majf=0, minf=9 00:19:45.816 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=82.7%, 16=15.7%, 32=0.0%, >=64=0.0% 00:19:45.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.816 complete : 0=0.0%, 4=87.1%, 8=12.6%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.816 issued rwts: total=2295,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.816 latency : target=0, window=0, percentile=100.00%, depth=16 00:19:45.816 filename2: (groupid=0, jobs=1): err= 0: pid=81466: Sat Jul 13 08:05:51 2024 00:19:45.816 read: IOPS=224, BW=900KiB/s (921kB/s)(9004KiB/10008msec) 00:19:45.816 slat (usec): min=4, max=8036, avg=17.80, stdev=169.15 00:19:45.816 clat (msec): min=11, max=144, avg=71.05, stdev=21.59 00:19:45.816 lat (msec): min=11, max=144, avg=71.07, stdev=21.59 00:19:45.816 clat percentiles (msec): 00:19:45.816 | 1.00th=[ 36], 5.00th=[ 45], 10.00th=[ 48], 20.00th=[ 48], 00:19:45.816 | 30.00th=[ 61], 40.00th=[ 63], 50.00th=[ 72], 60.00th=[ 72], 00:19:45.816 | 70.00th=[ 75], 80.00th=[ 95], 90.00th=[ 106], 95.00th=[ 108], 00:19:45.816 | 99.00th=[ 132], 99.50th=[ 132], 99.90th=[ 133], 99.95th=[ 144], 00:19:45.816 | 99.99th=[ 144] 00:19:45.816 bw ( KiB/s): min= 512, max= 1026, per=4.18%, avg=894.10, stdev=155.75, samples=20 00:19:45.816 iops : min= 128, max= 256, avg=223.50, stdev=38.92, samples=20 00:19:45.816 lat (msec) : 20=0.27%, 50=23.28%, 100=64.90%, 250=11.55% 00:19:45.816 cpu : usr=31.42%, sys=1.78%, ctx=892, majf=0, minf=9 00:19:45.816 IO depths : 1=0.1%, 2=1.1%, 4=4.4%, 8=79.0%, 16=15.4%, 32=0.0%, >=64=0.0% 00:19:45.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.816 complete : 0=0.0%, 4=88.1%, 8=10.9%, 16=1.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:45.816 issued rwts: total=2251,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:45.816 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:19:45.816 00:19:45.816 Run status group 0 (all jobs): 00:19:45.816 READ: bw=20.9MiB/s (21.9MB/s), 850KiB/s-931KiB/s (870kB/s-954kB/s), io=210MiB (220MB), run=10002-10052msec 00:19:46.075 08:05:51 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:19:46.075 08:05:51 -- target/dif.sh@43 -- # local sub 00:19:46.075 08:05:51 -- target/dif.sh@45 -- # for sub in "$@" 00:19:46.075 08:05:51 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:46.075 08:05:51 -- target/dif.sh@36 -- # local sub_id=0 00:19:46.075 08:05:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:46.075 08:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.075 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.075 08:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.075 08:05:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:46.075 08:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.075 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.075 08:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.075 08:05:51 -- target/dif.sh@45 -- # for sub in "$@" 00:19:46.075 08:05:51 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:46.075 08:05:51 -- target/dif.sh@36 -- # local sub_id=1 00:19:46.075 08:05:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:46.075 08:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.075 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.075 08:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.075 08:05:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:46.075 08:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.075 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.075 08:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.075 08:05:51 -- target/dif.sh@45 -- # for sub in "$@" 00:19:46.075 08:05:51 -- target/dif.sh@46 -- # destroy_subsystem 2 00:19:46.075 08:05:51 -- target/dif.sh@36 -- # local sub_id=2 00:19:46.075 08:05:51 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:46.075 08:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.075 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.075 08:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.075 08:05:51 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:19:46.075 08:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.075 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.075 08:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.075 08:05:51 -- target/dif.sh@115 -- # NULL_DIF=1 00:19:46.075 08:05:51 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:19:46.075 08:05:51 -- target/dif.sh@115 -- # numjobs=2 00:19:46.075 08:05:51 -- target/dif.sh@115 -- # iodepth=8 00:19:46.075 08:05:51 -- target/dif.sh@115 -- # runtime=5 00:19:46.075 08:05:51 -- target/dif.sh@115 -- # files=1 00:19:46.075 08:05:51 -- target/dif.sh@117 -- # create_subsystems 0 1 00:19:46.075 08:05:51 -- target/dif.sh@28 -- # local sub 00:19:46.075 08:05:51 -- target/dif.sh@30 -- # for sub in "$@" 00:19:46.075 08:05:51 -- target/dif.sh@31 -- # create_subsystem 0 00:19:46.075 08:05:51 -- target/dif.sh@18 -- # local sub_id=0 00:19:46.075 08:05:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:19:46.075 08:05:51 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.075 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.075 bdev_null0 00:19:46.075 08:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.075 08:05:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:46.075 08:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.075 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.075 08:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.075 08:05:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:46.075 08:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.075 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.075 08:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.075 08:05:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:46.075 08:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.075 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.075 [2024-07-13 08:05:51.777331] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:46.075 08:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.075 08:05:51 -- target/dif.sh@30 -- # for sub in "$@" 00:19:46.075 08:05:51 -- target/dif.sh@31 -- # create_subsystem 1 00:19:46.075 08:05:51 -- target/dif.sh@18 -- # local sub_id=1 00:19:46.075 08:05:51 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:19:46.075 08:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.075 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.075 bdev_null1 00:19:46.075 08:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.075 08:05:51 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:19:46.075 08:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.075 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.075 08:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.075 08:05:51 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:19:46.075 08:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.075 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.075 08:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.075 08:05:51 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:46.075 08:05:51 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:46.075 08:05:51 -- common/autotest_common.sh@10 -- # set +x 00:19:46.075 08:05:51 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:46.075 08:05:51 -- target/dif.sh@118 -- # fio /dev/fd/62 00:19:46.075 08:05:51 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:19:46.075 08:05:51 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:19:46.075 08:05:51 -- nvmf/common.sh@520 -- # config=() 00:19:46.075 08:05:51 -- nvmf/common.sh@520 -- # local subsystem config 00:19:46.075 08:05:51 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:46.075 08:05:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:46.075 08:05:51 -- nvmf/common.sh@542 
-- # config+=("$(cat <<-EOF 00:19:46.075 { 00:19:46.075 "params": { 00:19:46.075 "name": "Nvme$subsystem", 00:19:46.075 "trtype": "$TEST_TRANSPORT", 00:19:46.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.075 "adrfam": "ipv4", 00:19:46.075 "trsvcid": "$NVMF_PORT", 00:19:46.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.075 "hdgst": ${hdgst:-false}, 00:19:46.075 "ddgst": ${ddgst:-false} 00:19:46.075 }, 00:19:46.075 "method": "bdev_nvme_attach_controller" 00:19:46.075 } 00:19:46.075 EOF 00:19:46.075 )") 00:19:46.075 08:05:51 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:46.075 08:05:51 -- target/dif.sh@82 -- # gen_fio_conf 00:19:46.075 08:05:51 -- target/dif.sh@54 -- # local file 00:19:46.075 08:05:51 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:19:46.075 08:05:51 -- target/dif.sh@56 -- # cat 00:19:46.075 08:05:51 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:46.075 08:05:51 -- common/autotest_common.sh@1318 -- # local sanitizers 00:19:46.075 08:05:51 -- nvmf/common.sh@542 -- # cat 00:19:46.075 08:05:51 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.075 08:05:51 -- common/autotest_common.sh@1320 -- # shift 00:19:46.075 08:05:51 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:19:46.076 08:05:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:46.076 08:05:51 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:46.076 08:05:51 -- target/dif.sh@72 -- # (( file <= files )) 00:19:46.076 08:05:51 -- target/dif.sh@73 -- # cat 00:19:46.076 08:05:51 -- common/autotest_common.sh@1324 -- # grep libasan 00:19:46.076 08:05:51 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.076 08:05:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:46.076 08:05:51 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:46.076 08:05:51 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:46.076 { 00:19:46.076 "params": { 00:19:46.076 "name": "Nvme$subsystem", 00:19:46.076 "trtype": "$TEST_TRANSPORT", 00:19:46.076 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:46.076 "adrfam": "ipv4", 00:19:46.076 "trsvcid": "$NVMF_PORT", 00:19:46.076 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:46.076 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:46.076 "hdgst": ${hdgst:-false}, 00:19:46.076 "ddgst": ${ddgst:-false} 00:19:46.076 }, 00:19:46.076 "method": "bdev_nvme_attach_controller" 00:19:46.076 } 00:19:46.076 EOF 00:19:46.076 )") 00:19:46.076 08:05:51 -- target/dif.sh@72 -- # (( file++ )) 00:19:46.076 08:05:51 -- target/dif.sh@72 -- # (( file <= files )) 00:19:46.076 08:05:51 -- nvmf/common.sh@542 -- # cat 00:19:46.076 08:05:51 -- nvmf/common.sh@544 -- # jq . 
00:19:46.076 08:05:51 -- nvmf/common.sh@545 -- # IFS=, 00:19:46.076 08:05:51 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:46.076 "params": { 00:19:46.076 "name": "Nvme0", 00:19:46.076 "trtype": "tcp", 00:19:46.076 "traddr": "10.0.0.2", 00:19:46.076 "adrfam": "ipv4", 00:19:46.076 "trsvcid": "4420", 00:19:46.076 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:46.076 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:46.076 "hdgst": false, 00:19:46.076 "ddgst": false 00:19:46.076 }, 00:19:46.076 "method": "bdev_nvme_attach_controller" 00:19:46.076 },{ 00:19:46.076 "params": { 00:19:46.076 "name": "Nvme1", 00:19:46.076 "trtype": "tcp", 00:19:46.076 "traddr": "10.0.0.2", 00:19:46.076 "adrfam": "ipv4", 00:19:46.076 "trsvcid": "4420", 00:19:46.076 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:46.076 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:46.076 "hdgst": false, 00:19:46.076 "ddgst": false 00:19:46.076 }, 00:19:46.076 "method": "bdev_nvme_attach_controller" 00:19:46.076 }' 00:19:46.076 08:05:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:46.076 08:05:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:46.076 08:05:51 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:46.076 08:05:51 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:46.076 08:05:51 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:19:46.076 08:05:51 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:46.076 08:05:51 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:46.076 08:05:51 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:46.076 08:05:51 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:46.076 08:05:51 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:46.335 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:46.335 ... 00:19:46.335 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:19:46.335 ... 00:19:46.335 fio-3.35 00:19:46.335 Starting 4 threads 00:19:46.902 [2024-07-13 08:05:52.413262] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
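For this pass the harness set NULL_DIF=1, bs=8k,16k,128k, numjobs=2, iodepth=8, runtime=5 and files=1 (the dif.sh@115 lines above), which matches the fio headers that follow: random reads with 8k read / 16k write / 128k trim block sizes at queue depth 8, two job sections with two jobs each, hence "Starting 4 threads". The job file gen_fio_conf actually emits is not printed in the log, so the layout below is an assumption that merely mirrors those parameters.

; Approximate shape of the generated job file; section names follow the
; filename0/filename1 labels in the output below, everything else is assumed.
cat > /tmp/dif_rand_params.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
; read,write,trim block sizes, matching the bs=(R)/(W)/(T) header below
bs=8k,16k,128k
iodepth=8
runtime=5
numjobs=2

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF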
00:19:46.902 [2024-07-13 08:05:52.413992] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:19:52.160 00:19:52.160 filename0: (groupid=0, jobs=1): err= 0: pid=81556: Sat Jul 13 08:05:57 2024 00:19:52.160 read: IOPS=2143, BW=16.7MiB/s (17.6MB/s)(83.8MiB/5002msec) 00:19:52.160 slat (nsec): min=8195, max=65418, avg=15810.29, stdev=4656.93 00:19:52.160 clat (usec): min=1274, max=7713, avg=3690.20, stdev=1001.05 00:19:52.160 lat (usec): min=1289, max=7740, avg=3706.02, stdev=1000.92 00:19:52.160 clat percentiles (usec): 00:19:52.160 | 1.00th=[ 2008], 5.00th=[ 2057], 10.00th=[ 2147], 20.00th=[ 2704], 00:19:52.160 | 30.00th=[ 2999], 40.00th=[ 3130], 50.00th=[ 3916], 60.00th=[ 4080], 00:19:52.160 | 70.00th=[ 4555], 80.00th=[ 4817], 90.00th=[ 4883], 95.00th=[ 5014], 00:19:52.160 | 99.00th=[ 5145], 99.50th=[ 5145], 99.90th=[ 5276], 99.95th=[ 5538], 00:19:52.160 | 99.99th=[ 7111] 00:19:52.160 bw ( KiB/s): min=15744, max=17712, per=26.37%, avg=17114.67, stdev=677.50, samples=9 00:19:52.160 iops : min= 1968, max= 2214, avg=2139.33, stdev=84.69, samples=9 00:19:52.160 lat (msec) : 2=1.02%, 4=53.92%, 10=45.07% 00:19:52.160 cpu : usr=91.42%, sys=7.58%, ctx=145, majf=0, minf=9 00:19:52.160 IO depths : 1=0.1%, 2=5.0%, 4=61.0%, 8=34.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.160 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.160 complete : 0=0.0%, 4=98.2%, 8=1.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.160 issued rwts: total=10720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.160 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:52.160 filename0: (groupid=0, jobs=1): err= 0: pid=81557: Sat Jul 13 08:05:57 2024 00:19:52.160 read: IOPS=1990, BW=15.6MiB/s (16.3MB/s)(77.8MiB/5001msec) 00:19:52.160 slat (nsec): min=7797, max=62819, avg=15015.52, stdev=5275.05 00:19:52.160 clat (usec): min=1541, max=6861, avg=3972.37, stdev=1013.28 00:19:52.160 lat (usec): min=1556, max=6888, avg=3987.38, stdev=1011.62 00:19:52.160 clat percentiles (usec): 00:19:52.160 | 1.00th=[ 2008], 5.00th=[ 2089], 10.00th=[ 2376], 20.00th=[ 2933], 00:19:52.160 | 30.00th=[ 3130], 40.00th=[ 3949], 50.00th=[ 4113], 60.00th=[ 4621], 00:19:52.160 | 70.00th=[ 4817], 80.00th=[ 4948], 90.00th=[ 5014], 95.00th=[ 5080], 00:19:52.160 | 99.00th=[ 5276], 99.50th=[ 6063], 99.90th=[ 6325], 99.95th=[ 6390], 00:19:52.160 | 99.99th=[ 6849] 00:19:52.160 bw ( KiB/s): min=12672, max=17715, per=25.09%, avg=16281.22, stdev=1782.37, samples=9 00:19:52.161 iops : min= 1584, max= 2214, avg=2035.11, stdev=222.76, samples=9 00:19:52.161 lat (msec) : 2=0.57%, 4=43.40%, 10=56.03% 00:19:52.161 cpu : usr=91.24%, sys=7.86%, ctx=5, majf=0, minf=9 00:19:52.161 IO depths : 1=0.1%, 2=10.4%, 4=58.0%, 8=31.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.161 complete : 0=0.0%, 4=96.1%, 8=3.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.161 issued rwts: total=9956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.161 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:52.161 filename1: (groupid=0, jobs=1): err= 0: pid=81558: Sat Jul 13 08:05:57 2024 00:19:52.161 read: IOPS=2142, BW=16.7MiB/s (17.5MB/s)(83.7MiB/5002msec) 00:19:52.161 slat (nsec): min=6952, max=60743, avg=12944.86, stdev=5245.55 00:19:52.161 clat (usec): min=1235, max=7172, avg=3699.13, stdev=1015.60 00:19:52.161 lat (usec): min=1243, max=7191, avg=3712.07, stdev=1015.86 00:19:52.161 clat percentiles (usec): 00:19:52.161 | 1.00th=[ 1975], 
5.00th=[ 2040], 10.00th=[ 2114], 20.00th=[ 2704], 00:19:52.161 | 30.00th=[ 2999], 40.00th=[ 3130], 50.00th=[ 3916], 60.00th=[ 4113], 00:19:52.161 | 70.00th=[ 4555], 80.00th=[ 4817], 90.00th=[ 4948], 95.00th=[ 5014], 00:19:52.161 | 99.00th=[ 5145], 99.50th=[ 5211], 99.90th=[ 5538], 99.95th=[ 5604], 00:19:52.161 | 99.99th=[ 7046] 00:19:52.161 bw ( KiB/s): min=15744, max=17664, per=26.37%, avg=17112.89, stdev=664.25, samples=9 00:19:52.161 iops : min= 1968, max= 2208, avg=2139.11, stdev=83.03, samples=9 00:19:52.161 lat (msec) : 2=2.59%, 4=51.90%, 10=45.51% 00:19:52.161 cpu : usr=92.24%, sys=6.84%, ctx=6, majf=0, minf=0 00:19:52.161 IO depths : 1=0.1%, 2=5.1%, 4=60.9%, 8=34.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.161 complete : 0=0.0%, 4=98.1%, 8=1.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.161 issued rwts: total=10715,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.161 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:52.161 filename1: (groupid=0, jobs=1): err= 0: pid=81559: Sat Jul 13 08:05:57 2024 00:19:52.161 read: IOPS=1835, BW=14.3MiB/s (15.0MB/s)(71.7MiB/5002msec) 00:19:52.161 slat (nsec): min=3689, max=59129, avg=13723.54, stdev=5467.95 00:19:52.161 clat (usec): min=800, max=7118, avg=4310.22, stdev=911.97 00:19:52.161 lat (usec): min=808, max=7134, avg=4323.94, stdev=910.09 00:19:52.161 clat percentiles (usec): 00:19:52.161 | 1.00th=[ 1991], 5.00th=[ 2147], 10.00th=[ 2933], 20.00th=[ 3785], 00:19:52.161 | 30.00th=[ 3982], 40.00th=[ 4228], 50.00th=[ 4817], 60.00th=[ 4883], 00:19:52.161 | 70.00th=[ 4948], 80.00th=[ 5014], 90.00th=[ 5080], 95.00th=[ 5145], 00:19:52.161 | 99.00th=[ 5342], 99.50th=[ 5538], 99.90th=[ 6128], 99.95th=[ 6259], 00:19:52.161 | 99.99th=[ 7111] 00:19:52.161 bw ( KiB/s): min=12544, max=17536, per=22.16%, avg=14378.67, stdev=2045.40, samples=9 00:19:52.161 iops : min= 1568, max= 2192, avg=1797.33, stdev=255.68, samples=9 00:19:52.161 lat (usec) : 1000=0.03% 00:19:52.161 lat (msec) : 2=1.09%, 4=29.97%, 10=68.90% 00:19:52.161 cpu : usr=91.84%, sys=7.30%, ctx=9, majf=0, minf=9 00:19:52.161 IO depths : 1=0.1%, 2=17.0%, 4=54.4%, 8=28.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:19:52.161 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.161 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:52.161 issued rwts: total=9181,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:52.161 latency : target=0, window=0, percentile=100.00%, depth=8 00:19:52.161 00:19:52.161 Run status group 0 (all jobs): 00:19:52.161 READ: bw=63.4MiB/s (66.4MB/s), 14.3MiB/s-16.7MiB/s (15.0MB/s-17.6MB/s), io=317MiB (332MB), run=5001-5002msec 00:19:52.161 08:05:57 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:19:52.161 08:05:57 -- target/dif.sh@43 -- # local sub 00:19:52.161 08:05:57 -- target/dif.sh@45 -- # for sub in "$@" 00:19:52.161 08:05:57 -- target/dif.sh@46 -- # destroy_subsystem 0 00:19:52.161 08:05:57 -- target/dif.sh@36 -- # local sub_id=0 00:19:52.161 08:05:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:52.161 08:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.161 08:05:57 -- common/autotest_common.sh@10 -- # set +x 00:19:52.161 08:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.161 08:05:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:19:52.161 08:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.161 08:05:57 -- 
common/autotest_common.sh@10 -- # set +x 00:19:52.161 08:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.161 08:05:57 -- target/dif.sh@45 -- # for sub in "$@" 00:19:52.161 08:05:57 -- target/dif.sh@46 -- # destroy_subsystem 1 00:19:52.161 08:05:57 -- target/dif.sh@36 -- # local sub_id=1 00:19:52.161 08:05:57 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:52.161 08:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.161 08:05:57 -- common/autotest_common.sh@10 -- # set +x 00:19:52.161 08:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.161 08:05:57 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:19:52.161 08:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.161 08:05:57 -- common/autotest_common.sh@10 -- # set +x 00:19:52.161 ************************************ 00:19:52.161 END TEST fio_dif_rand_params 00:19:52.161 ************************************ 00:19:52.161 08:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.161 00:19:52.161 real 0m23.022s 00:19:52.161 user 2m3.460s 00:19:52.161 sys 0m8.267s 00:19:52.161 08:05:57 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:52.161 08:05:57 -- common/autotest_common.sh@10 -- # set +x 00:19:52.161 08:05:57 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:19:52.161 08:05:57 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:19:52.161 08:05:57 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:19:52.161 08:05:57 -- common/autotest_common.sh@10 -- # set +x 00:19:52.161 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:19:52.161 ************************************ 00:19:52.161 START TEST fio_dif_digest 00:19:52.161 ************************************ 00:19:52.161 08:05:57 -- common/autotest_common.sh@1104 -- # fio_dif_digest 00:19:52.161 08:05:57 -- target/dif.sh@123 -- # local NULL_DIF 00:19:52.161 08:05:57 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:19:52.161 08:05:57 -- target/dif.sh@125 -- # local hdgst ddgst 00:19:52.161 08:05:57 -- target/dif.sh@127 -- # NULL_DIF=3 00:19:52.161 08:05:57 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:19:52.161 08:05:57 -- target/dif.sh@127 -- # numjobs=3 00:19:52.161 08:05:57 -- target/dif.sh@127 -- # iodepth=3 00:19:52.161 08:05:57 -- target/dif.sh@127 -- # runtime=10 00:19:52.161 08:05:57 -- target/dif.sh@128 -- # hdgst=true 00:19:52.161 08:05:57 -- target/dif.sh@128 -- # ddgst=true 00:19:52.161 08:05:57 -- target/dif.sh@130 -- # create_subsystems 0 00:19:52.161 08:05:57 -- target/dif.sh@28 -- # local sub 00:19:52.161 08:05:57 -- target/dif.sh@30 -- # for sub in "$@" 00:19:52.161 08:05:57 -- target/dif.sh@31 -- # create_subsystem 0 00:19:52.161 08:05:57 -- target/dif.sh@18 -- # local sub_id=0 00:19:52.161 08:05:57 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:19:52.161 08:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.161 08:05:57 -- common/autotest_common.sh@10 -- # set +x 00:19:52.161 bdev_null0 00:19:52.161 08:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.161 08:05:57 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:19:52.161 08:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.161 08:05:57 -- common/autotest_common.sh@10 -- # set +x 00:19:52.161 08:05:57 -- 
common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.161 08:05:57 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:19:52.162 08:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.162 08:05:57 -- common/autotest_common.sh@10 -- # set +x 00:19:52.162 08:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.162 08:05:57 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:52.162 08:05:57 -- common/autotest_common.sh@551 -- # xtrace_disable 00:19:52.162 08:05:57 -- common/autotest_common.sh@10 -- # set +x 00:19:52.162 [2024-07-13 08:05:57.811934] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:52.162 08:05:57 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:19:52.162 08:05:57 -- target/dif.sh@131 -- # fio /dev/fd/62 00:19:52.162 08:05:57 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:19:52.162 08:05:57 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:19:52.162 08:05:57 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.162 08:05:57 -- common/autotest_common.sh@1335 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.162 08:05:57 -- target/dif.sh@82 -- # gen_fio_conf 00:19:52.162 08:05:57 -- common/autotest_common.sh@1316 -- # local fio_dir=/usr/src/fio 00:19:52.162 08:05:57 -- nvmf/common.sh@520 -- # config=() 00:19:52.162 08:05:57 -- common/autotest_common.sh@1318 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:52.162 08:05:57 -- target/dif.sh@54 -- # local file 00:19:52.162 08:05:57 -- nvmf/common.sh@520 -- # local subsystem config 00:19:52.162 08:05:57 -- target/dif.sh@56 -- # cat 00:19:52.162 08:05:57 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:52.162 08:05:57 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:52.162 { 00:19:52.162 "params": { 00:19:52.162 "name": "Nvme$subsystem", 00:19:52.162 "trtype": "$TEST_TRANSPORT", 00:19:52.162 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:52.162 "adrfam": "ipv4", 00:19:52.162 "trsvcid": "$NVMF_PORT", 00:19:52.162 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:52.162 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:52.162 "hdgst": ${hdgst:-false}, 00:19:52.162 "ddgst": ${ddgst:-false} 00:19:52.162 }, 00:19:52.162 "method": "bdev_nvme_attach_controller" 00:19:52.162 } 00:19:52.162 EOF 00:19:52.162 )") 00:19:52.162 08:05:57 -- common/autotest_common.sh@1318 -- # local sanitizers 00:19:52.162 08:05:57 -- common/autotest_common.sh@1319 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.162 08:05:57 -- common/autotest_common.sh@1320 -- # shift 00:19:52.162 08:05:57 -- common/autotest_common.sh@1322 -- # local asan_lib= 00:19:52.162 08:05:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:52.162 08:05:57 -- nvmf/common.sh@542 -- # cat 00:19:52.162 08:05:57 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.162 08:05:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:52.162 08:05:57 -- common/autotest_common.sh@1324 -- # grep libasan 00:19:52.162 08:05:57 -- target/dif.sh@72 -- # (( file = 1 )) 00:19:52.162 08:05:57 -- target/dif.sh@72 -- # (( file <= files )) 00:19:52.162 08:05:57 -- nvmf/common.sh@544 -- # jq . 
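The digest variant above rebuilds a single null bdev, this time with --dif-type 3, and exports it over NVMe/TCP exactly as before. Spelled out as direct rpc.py calls (rpc_cmd in the trace is a thin wrapper around scripts/rpc.py; the RPC socket path is an assumption), the target-side setup amounts to:

# Target-side setup for the digest run, as direct rpc.py calls.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

# 64 MiB null bdev, 512-byte blocks with 16 bytes of metadata, DIF type 3
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
     --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
     -t tcp -a 10.0.0.2 -s 4420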
00:19:52.162 08:05:57 -- nvmf/common.sh@545 -- # IFS=, 00:19:52.162 08:05:57 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:52.162 "params": { 00:19:52.162 "name": "Nvme0", 00:19:52.162 "trtype": "tcp", 00:19:52.162 "traddr": "10.0.0.2", 00:19:52.162 "adrfam": "ipv4", 00:19:52.162 "trsvcid": "4420", 00:19:52.162 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:19:52.162 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:19:52.162 "hdgst": true, 00:19:52.162 "ddgst": true 00:19:52.162 }, 00:19:52.162 "method": "bdev_nvme_attach_controller" 00:19:52.162 }' 00:19:52.162 08:05:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:52.162 08:05:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:52.162 08:05:57 -- common/autotest_common.sh@1323 -- # for sanitizer in "${sanitizers[@]}" 00:19:52.162 08:05:57 -- common/autotest_common.sh@1324 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:19:52.162 08:05:57 -- common/autotest_common.sh@1324 -- # grep libclang_rt.asan 00:19:52.162 08:05:57 -- common/autotest_common.sh@1324 -- # awk '{print $3}' 00:19:52.162 08:05:57 -- common/autotest_common.sh@1324 -- # asan_lib= 00:19:52.162 08:05:57 -- common/autotest_common.sh@1325 -- # [[ -n '' ]] 00:19:52.162 08:05:57 -- common/autotest_common.sh@1331 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:19:52.162 08:05:57 -- common/autotest_common.sh@1331 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:19:52.421 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:19:52.421 ... 00:19:52.421 fio-3.35 00:19:52.421 Starting 3 threads 00:19:52.680 [2024-07-13 08:05:58.336957] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
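Relative to the earlier runs, the initiator-side difference for the digest test sits in the attach parameters printed just above: "hdgst": true and "ddgst": true, so every NVMe/TCP PDU carries header and data CRC32C digests, while fio itself only runs the 128 KiB random-read workload configured at dif.sh@127 (numjobs=3, iodepth=3, runtime=10). A rough equivalent job file, again an assumption rather than the harness's literal output:

; Assumed sketch of the digest-run job file; the three jobs map to the three
; threads started below. Digest generation and checking happen in the
; NVMe/TCP transport, not in fio.
cat > /tmp/dif_digest.fio <<'EOF'
[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=128k
iodepth=3
runtime=10
numjobs=3

[filename0]
filename=Nvme0n1
EOF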
00:19:52.680 [2024-07-13 08:05:58.338035] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:02.658 00:20:02.658 filename0: (groupid=0, jobs=1): err= 0: pid=81629: Sat Jul 13 08:06:08 2024 00:20:02.658 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(266MiB/10010msec) 00:20:02.658 slat (nsec): min=7645, max=50494, avg=11280.83, stdev=4996.99 00:20:02.658 clat (usec): min=10329, max=22140, avg=14070.16, stdev=826.94 00:20:02.658 lat (usec): min=10338, max=22154, avg=14081.44, stdev=827.35 00:20:02.658 clat percentiles (usec): 00:20:02.658 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13173], 20.00th=[13304], 00:20:02.659 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14091], 60.00th=[14353], 00:20:02.659 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15008], 95.00th=[15139], 00:20:02.659 | 99.00th=[16319], 99.50th=[16450], 99.90th=[22152], 99.95th=[22152], 00:20:02.659 | 99.99th=[22152] 00:20:02.659 bw ( KiB/s): min=26112, max=29184, per=33.35%, avg=27240.84, stdev=738.97, samples=19 00:20:02.659 iops : min= 204, max= 228, avg=212.79, stdev= 5.76, samples=19 00:20:02.659 lat (msec) : 20=99.86%, 50=0.14% 00:20:02.659 cpu : usr=91.20%, sys=8.24%, ctx=18, majf=0, minf=0 00:20:02.659 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.659 issued rwts: total=2130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.659 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:02.659 filename0: (groupid=0, jobs=1): err= 0: pid=81630: Sat Jul 13 08:06:08 2024 00:20:02.659 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(266MiB/10007msec) 00:20:02.659 slat (nsec): min=7620, max=35589, avg=10970.61, stdev=4295.73 00:20:02.659 clat (usec): min=13035, max=22619, avg=14086.87, stdev=873.47 00:20:02.659 lat (usec): min=13044, max=22644, avg=14097.84, stdev=873.89 00:20:02.659 clat percentiles (usec): 00:20:02.659 | 1.00th=[13042], 5.00th=[13042], 10.00th=[13173], 20.00th=[13304], 00:20:02.659 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14091], 60.00th=[14353], 00:20:02.659 | 70.00th=[14484], 80.00th=[14746], 90.00th=[14877], 95.00th=[15139], 00:20:02.659 | 99.00th=[16319], 99.50th=[16712], 99.90th=[22676], 99.95th=[22676], 00:20:02.659 | 99.99th=[22676] 00:20:02.659 bw ( KiB/s): min=26112, max=29184, per=33.31%, avg=27203.37, stdev=1001.87, samples=19 00:20:02.659 iops : min= 204, max= 228, avg=212.53, stdev= 7.83, samples=19 00:20:02.659 lat (msec) : 20=99.72%, 50=0.28% 00:20:02.659 cpu : usr=91.15%, sys=8.31%, ctx=5, majf=0, minf=9 00:20:02.659 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.659 issued rwts: total=2127,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.659 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:02.659 filename0: (groupid=0, jobs=1): err= 0: pid=81631: Sat Jul 13 08:06:08 2024 00:20:02.659 read: IOPS=212, BW=26.6MiB/s (27.9MB/s)(266MiB/10009msec) 00:20:02.659 slat (nsec): min=7597, max=44944, avg=10819.95, stdev=4205.26 00:20:02.659 clat (usec): min=10424, max=20985, avg=14069.88, stdev=818.25 00:20:02.659 lat (usec): min=10447, max=20997, avg=14080.70, stdev=818.60 00:20:02.659 clat percentiles (usec): 00:20:02.659 | 1.00th=[13042], 
5.00th=[13042], 10.00th=[13173], 20.00th=[13304], 00:20:02.659 | 30.00th=[13435], 40.00th=[13698], 50.00th=[14091], 60.00th=[14353], 00:20:02.659 | 70.00th=[14484], 80.00th=[14746], 90.00th=[15008], 95.00th=[15270], 00:20:02.659 | 99.00th=[16450], 99.50th=[16712], 99.90th=[20841], 99.95th=[21103], 00:20:02.659 | 99.99th=[21103] 00:20:02.659 bw ( KiB/s): min=26112, max=29184, per=33.36%, avg=27243.63, stdev=860.65, samples=19 00:20:02.659 iops : min= 204, max= 228, avg=212.79, stdev= 6.72, samples=19 00:20:02.659 lat (msec) : 20=99.86%, 50=0.14% 00:20:02.659 cpu : usr=91.29%, sys=8.16%, ctx=7, majf=0, minf=9 00:20:02.659 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:02.659 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.659 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:02.659 issued rwts: total=2130,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:02.659 latency : target=0, window=0, percentile=100.00%, depth=3 00:20:02.659 00:20:02.659 Run status group 0 (all jobs): 00:20:02.659 READ: bw=79.8MiB/s (83.6MB/s), 26.6MiB/s-26.6MiB/s (27.9MB/s-27.9MB/s), io=798MiB (837MB), run=10007-10010msec 00:20:02.917 08:06:08 -- target/dif.sh@132 -- # destroy_subsystems 0 00:20:02.917 08:06:08 -- target/dif.sh@43 -- # local sub 00:20:02.917 08:06:08 -- target/dif.sh@45 -- # for sub in "$@" 00:20:02.917 08:06:08 -- target/dif.sh@46 -- # destroy_subsystem 0 00:20:02.917 08:06:08 -- target/dif.sh@36 -- # local sub_id=0 00:20:02.917 08:06:08 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:02.917 08:06:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.917 08:06:08 -- common/autotest_common.sh@10 -- # set +x 00:20:02.917 08:06:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.917 08:06:08 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:20:02.917 08:06:08 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:02.917 08:06:08 -- common/autotest_common.sh@10 -- # set +x 00:20:02.917 ************************************ 00:20:02.917 END TEST fio_dif_digest 00:20:02.917 ************************************ 00:20:02.917 08:06:08 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:02.917 00:20:02.917 real 0m10.856s 00:20:02.917 user 0m27.938s 00:20:02.917 sys 0m2.685s 00:20:02.917 08:06:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:02.917 08:06:08 -- common/autotest_common.sh@10 -- # set +x 00:20:02.917 08:06:08 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:20:02.917 08:06:08 -- target/dif.sh@147 -- # nvmftestfini 00:20:02.917 08:06:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:02.917 08:06:08 -- nvmf/common.sh@116 -- # sync 00:20:02.917 08:06:08 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:02.917 08:06:08 -- nvmf/common.sh@119 -- # set +e 00:20:02.917 08:06:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:02.917 08:06:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:02.917 rmmod nvme_tcp 00:20:03.176 rmmod nvme_fabrics 00:20:03.176 rmmod nvme_keyring 00:20:03.176 08:06:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:03.176 08:06:08 -- nvmf/common.sh@123 -- # set -e 00:20:03.176 08:06:08 -- nvmf/common.sh@124 -- # return 0 00:20:03.176 08:06:08 -- nvmf/common.sh@477 -- # '[' -n 81141 ']' 00:20:03.176 08:06:08 -- nvmf/common.sh@478 -- # killprocess 81141 00:20:03.176 08:06:08 -- common/autotest_common.sh@926 -- # '[' -z 81141 ']' 00:20:03.176 08:06:08 -- 
common/autotest_common.sh@930 -- # kill -0 81141 00:20:03.176 08:06:08 -- common/autotest_common.sh@931 -- # uname 00:20:03.176 08:06:08 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:03.176 08:06:08 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 81141 00:20:03.176 killing process with pid 81141 00:20:03.176 08:06:08 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:03.176 08:06:08 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:03.176 08:06:08 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 81141' 00:20:03.177 08:06:08 -- common/autotest_common.sh@945 -- # kill 81141 00:20:03.177 08:06:08 -- common/autotest_common.sh@950 -- # wait 81141 00:20:03.177 08:06:08 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:03.177 08:06:08 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:03.744 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:03.744 Waiting for block devices as requested 00:20:03.744 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:20:03.744 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:20:04.004 08:06:09 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:04.004 08:06:09 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:04.004 08:06:09 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:04.004 08:06:09 -- nvmf/common.sh@277 -- # remove_spdk_ns 00:20:04.004 08:06:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.004 08:06:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:04.004 08:06:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.004 08:06:09 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:04.004 ************************************ 00:20:04.004 END TEST nvmf_dif 00:20:04.004 ************************************ 00:20:04.004 00:20:04.004 real 0m58.214s 00:20:04.004 user 3m45.484s 00:20:04.004 sys 0m19.428s 00:20:04.004 08:06:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:04.004 08:06:09 -- common/autotest_common.sh@10 -- # set +x 00:20:04.004 08:06:09 -- spdk/autotest.sh@301 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:04.004 08:06:09 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:04.004 08:06:09 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:04.004 08:06:09 -- common/autotest_common.sh@10 -- # set +x 00:20:04.004 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:20:04.004 ************************************ 00:20:04.004 START TEST nvmf_abort_qd_sizes 00:20:04.004 ************************************ 00:20:04.004 08:06:09 -- common/autotest_common.sh@1104 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:20:04.004 * Looking for test storage... 
00:20:04.004 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:04.004 08:06:09 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:04.004 08:06:09 -- nvmf/common.sh@7 -- # uname -s 00:20:04.004 08:06:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:04.004 08:06:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:04.004 08:06:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:04.004 08:06:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:04.004 08:06:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:04.004 08:06:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:04.004 08:06:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:04.004 08:06:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:04.004 08:06:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:04.004 08:06:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:04.004 08:06:09 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 00:20:04.004 08:06:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=13d3a838-6067-4799-8998-c5cad9c1d570 00:20:04.004 08:06:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:04.004 08:06:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:04.004 08:06:09 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:04.004 08:06:09 -- nvmf/common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:04.004 08:06:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:04.004 08:06:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:04.004 08:06:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:04.004 08:06:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.004 08:06:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.005 08:06:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.005 08:06:09 -- paths/export.sh@5 -- # export PATH 00:20:04.005 08:06:09 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:04.005 08:06:09 -- nvmf/common.sh@46 -- # : 0 00:20:04.005 08:06:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:04.005 08:06:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:04.005 08:06:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:04.005 08:06:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:04.005 08:06:09 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:04.005 08:06:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:04.005 08:06:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:04.005 08:06:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:04.005 08:06:09 -- target/abort_qd_sizes.sh@73 -- # nvmftestinit 00:20:04.005 08:06:09 -- nvmf/common.sh@429 -- # '[' -z tcp ']' 00:20:04.005 08:06:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:04.005 08:06:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:04.005 08:06:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:04.005 08:06:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:04.005 08:06:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:04.005 08:06:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:04.005 08:06:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:04.005 08:06:09 -- nvmf/common.sh@402 -- # [[ virt != virt ]] 00:20:04.005 08:06:09 -- nvmf/common.sh@404 -- # [[ no == yes ]] 00:20:04.005 08:06:09 -- nvmf/common.sh@411 -- # [[ virt == phy ]] 00:20:04.005 08:06:09 -- nvmf/common.sh@414 -- # [[ virt == phy-fallback ]] 00:20:04.005 08:06:09 -- nvmf/common.sh@419 -- # [[ tcp == tcp ]] 00:20:04.005 08:06:09 -- nvmf/common.sh@420 -- # nvmf_veth_init 00:20:04.005 08:06:09 -- nvmf/common.sh@140 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:04.005 08:06:09 -- nvmf/common.sh@141 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:04.005 08:06:09 -- nvmf/common.sh@142 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:04.005 08:06:09 -- nvmf/common.sh@143 -- # NVMF_BRIDGE=nvmf_br 00:20:04.005 08:06:09 -- nvmf/common.sh@144 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:04.005 08:06:09 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:04.005 08:06:09 -- nvmf/common.sh@146 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:04.005 08:06:09 -- nvmf/common.sh@147 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:04.005 08:06:09 -- nvmf/common.sh@148 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:04.005 08:06:09 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:04.005 08:06:09 -- nvmf/common.sh@150 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:04.005 08:06:09 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:04.005 08:06:09 -- nvmf/common.sh@153 -- # ip link set nvmf_init_br nomaster 00:20:04.005 08:06:09 -- nvmf/common.sh@154 -- # ip link set nvmf_tgt_br nomaster 00:20:04.005 Cannot find device "nvmf_tgt_br" 00:20:04.005 08:06:09 -- nvmf/common.sh@154 -- # true 00:20:04.005 08:06:09 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br2 nomaster 00:20:04.005 Cannot find device "nvmf_tgt_br2" 00:20:04.005 08:06:09 -- nvmf/common.sh@155 -- # true 
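The ip/iptables commands around this point are nvmf_veth_init building the isolated test network: the target runs inside the nvmf_tgt_ns_spdk namespace, veth pairs tie it and the initiator together through the nvmf_br bridge, the initiator gets 10.0.0.1 while the target answers on 10.0.0.2 (and 10.0.0.3 on a second interface), and an iptables rule admits NVMe/TCP traffic on port 4420. Condensed, with the second target interface and error handling left out, the topology they create is:

# Condensed sketch of the nvmf_veth_init topology; names and addresses are
# taken from the trace, nvmf_tgt_if2 / 10.0.0.3 is set up the same way and
# omitted here.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br

ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up

# allow NVMe/TCP traffic from the initiator side and across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

ping -c 1 10.0.0.2   # initiator -> target reachability check

The ping transcripts that follow in the trace are exactly this reachability check, run against 10.0.0.2, 10.0.0.3 and back to 10.0.0.1 from inside the namespace.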
00:20:04.005 08:06:09 -- nvmf/common.sh@156 -- # ip link set nvmf_init_br down 00:20:04.005 08:06:09 -- nvmf/common.sh@157 -- # ip link set nvmf_tgt_br down 00:20:04.005 Cannot find device "nvmf_tgt_br" 00:20:04.005 08:06:09 -- nvmf/common.sh@157 -- # true 00:20:04.005 08:06:09 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br2 down 00:20:04.005 Cannot find device "nvmf_tgt_br2" 00:20:04.005 08:06:09 -- nvmf/common.sh@158 -- # true 00:20:04.005 08:06:09 -- nvmf/common.sh@159 -- # ip link delete nvmf_br type bridge 00:20:04.263 08:06:09 -- nvmf/common.sh@160 -- # ip link delete nvmf_init_if 00:20:04.263 08:06:09 -- nvmf/common.sh@161 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:04.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.263 08:06:09 -- nvmf/common.sh@161 -- # true 00:20:04.263 08:06:09 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:04.263 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:04.263 08:06:09 -- nvmf/common.sh@162 -- # true 00:20:04.263 08:06:09 -- nvmf/common.sh@165 -- # ip netns add nvmf_tgt_ns_spdk 00:20:04.263 08:06:09 -- nvmf/common.sh@168 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:04.263 08:06:09 -- nvmf/common.sh@169 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:04.263 08:06:09 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:04.263 08:06:09 -- nvmf/common.sh@173 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:04.263 08:06:09 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:04.263 08:06:09 -- nvmf/common.sh@177 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:04.263 08:06:09 -- nvmf/common.sh@178 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:04.263 08:06:09 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:04.263 08:06:09 -- nvmf/common.sh@182 -- # ip link set nvmf_init_if up 00:20:04.263 08:06:09 -- nvmf/common.sh@183 -- # ip link set nvmf_init_br up 00:20:04.263 08:06:09 -- nvmf/common.sh@184 -- # ip link set nvmf_tgt_br up 00:20:04.263 08:06:09 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br2 up 00:20:04.263 08:06:09 -- nvmf/common.sh@186 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:04.263 08:06:09 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:04.263 08:06:09 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:04.263 08:06:09 -- nvmf/common.sh@191 -- # ip link add nvmf_br type bridge 00:20:04.263 08:06:10 -- nvmf/common.sh@192 -- # ip link set nvmf_br up 00:20:04.263 08:06:10 -- nvmf/common.sh@195 -- # ip link set nvmf_init_br master nvmf_br 00:20:04.263 08:06:10 -- nvmf/common.sh@196 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:04.263 08:06:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:04.263 08:06:10 -- nvmf/common.sh@200 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:04.263 08:06:10 -- nvmf/common.sh@201 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:04.263 08:06:10 -- nvmf/common.sh@204 -- # ping -c 1 10.0.0.2 00:20:04.263 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:04.263 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.087 ms 00:20:04.263 00:20:04.263 --- 10.0.0.2 ping statistics --- 00:20:04.263 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.263 rtt min/avg/max/mdev = 0.087/0.087/0.087/0.000 ms 00:20:04.263 08:06:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.3 00:20:04.520 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:04.520 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:20:04.520 00:20:04.520 --- 10.0.0.3 ping statistics --- 00:20:04.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.520 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:20:04.520 08:06:10 -- nvmf/common.sh@206 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:04.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:04.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:20:04.520 00:20:04.520 --- 10.0.0.1 ping statistics --- 00:20:04.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:04.520 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:20:04.520 08:06:10 -- nvmf/common.sh@208 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:04.520 08:06:10 -- nvmf/common.sh@421 -- # return 0 00:20:04.520 08:06:10 -- nvmf/common.sh@438 -- # '[' iso == iso ']' 00:20:04.520 08:06:10 -- nvmf/common.sh@439 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:20:05.084 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:05.084 0000:00:06.0 (1b36 0010): nvme -> uio_pci_generic 00:20:05.342 0000:00:07.0 (1b36 0010): nvme -> uio_pci_generic 00:20:05.342 08:06:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.342 08:06:10 -- nvmf/common.sh@443 -- # [[ tcp == \r\d\m\a ]] 00:20:05.342 08:06:10 -- nvmf/common.sh@452 -- # [[ tcp == \t\c\p ]] 00:20:05.342 08:06:10 -- nvmf/common.sh@453 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.342 08:06:10 -- nvmf/common.sh@456 -- # '[' tcp == tcp ']' 00:20:05.342 08:06:10 -- nvmf/common.sh@462 -- # modprobe nvme-tcp 00:20:05.342 08:06:10 -- target/abort_qd_sizes.sh@74 -- # nvmfappstart -m 0xf 00:20:05.342 08:06:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:05.342 08:06:10 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:05.342 08:06:10 -- common/autotest_common.sh@10 -- # set +x 00:20:05.342 08:06:11 -- nvmf/common.sh@469 -- # nvmfpid=82147 00:20:05.342 08:06:11 -- nvmf/common.sh@468 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:20:05.342 08:06:11 -- nvmf/common.sh@470 -- # waitforlisten 82147 00:20:05.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:05.342 08:06:11 -- common/autotest_common.sh@819 -- # '[' -z 82147 ']' 00:20:05.342 08:06:11 -- common/autotest_common.sh@823 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.343 08:06:11 -- common/autotest_common.sh@824 -- # local max_retries=100 00:20:05.343 08:06:11 -- common/autotest_common.sh@826 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:05.343 08:06:11 -- common/autotest_common.sh@828 -- # xtrace_disable 00:20:05.343 08:06:11 -- common/autotest_common.sh@10 -- # set +x 00:20:05.343 [2024-07-13 08:06:11.056707] Starting SPDK v24.01.1-pre git sha1 4b94202c6 / DPDK 22.11.4 initialization... 
00:20:05.343 [2024-07-13 08:06:11.057067] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:05.602 [2024-07-13 08:06:11.200896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:05.602 [2024-07-13 08:06:11.248996] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:05.602 [2024-07-13 08:06:11.249473] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:05.602 [2024-07-13 08:06:11.249510] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:05.602 [2024-07-13 08:06:11.249522] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:05.602 [2024-07-13 08:06:11.249684] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:05.602 [2024-07-13 08:06:11.250017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:05.602 [2024-07-13 08:06:11.250124] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:05.602 [2024-07-13 08:06:11.250131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.539 08:06:12 -- common/autotest_common.sh@848 -- # (( i == 0 )) 00:20:06.539 08:06:12 -- common/autotest_common.sh@852 -- # return 0 00:20:06.539 08:06:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:06.539 08:06:12 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:06.539 08:06:12 -- common/autotest_common.sh@10 -- # set +x 00:20:06.539 08:06:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.539 08:06:12 -- target/abort_qd_sizes.sh@76 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:20:06.539 08:06:12 -- target/abort_qd_sizes.sh@78 -- # mapfile -t nvmes 00:20:06.539 08:06:12 -- target/abort_qd_sizes.sh@78 -- # nvme_in_userspace 00:20:06.539 08:06:12 -- scripts/common.sh@311 -- # local bdf bdfs 00:20:06.539 08:06:12 -- scripts/common.sh@312 -- # local nvmes 00:20:06.539 08:06:12 -- scripts/common.sh@314 -- # [[ -n '' ]] 00:20:06.539 08:06:12 -- scripts/common.sh@317 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:20:06.539 08:06:12 -- scripts/common.sh@317 -- # iter_pci_class_code 01 08 02 00:20:06.539 08:06:12 -- scripts/common.sh@297 -- # local bdf= 00:20:06.539 08:06:12 -- scripts/common.sh@299 -- # iter_all_pci_class_code 01 08 02 00:20:06.539 08:06:12 -- scripts/common.sh@232 -- # local class 00:20:06.539 08:06:12 -- scripts/common.sh@233 -- # local subclass 00:20:06.539 08:06:12 -- scripts/common.sh@234 -- # local progif 00:20:06.539 08:06:12 -- scripts/common.sh@235 -- # printf %02x 1 00:20:06.539 08:06:12 -- scripts/common.sh@235 -- # class=01 00:20:06.539 08:06:12 -- scripts/common.sh@236 -- # printf %02x 8 00:20:06.539 08:06:12 -- scripts/common.sh@236 -- # subclass=08 00:20:06.540 08:06:12 -- scripts/common.sh@237 -- # printf %02x 2 00:20:06.540 08:06:12 -- scripts/common.sh@237 -- # progif=02 00:20:06.540 08:06:12 -- scripts/common.sh@239 -- # hash lspci 00:20:06.540 08:06:12 -- scripts/common.sh@240 -- # '[' 02 '!=' 00 ']' 00:20:06.540 08:06:12 -- scripts/common.sh@241 -- # lspci -mm -n -D 00:20:06.540 08:06:12 -- scripts/common.sh@242 -- # grep -i -- -p02 00:20:06.540 08:06:12 -- 
scripts/common.sh@243 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:20:06.540 08:06:12 -- scripts/common.sh@244 -- # tr -d '"' 00:20:06.540 08:06:12 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:06.540 08:06:12 -- scripts/common.sh@300 -- # pci_can_use 0000:00:06.0 00:20:06.540 08:06:12 -- scripts/common.sh@15 -- # local i 00:20:06.540 08:06:12 -- scripts/common.sh@18 -- # [[ =~ 0000:00:06.0 ]] 00:20:06.540 08:06:12 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:06.540 08:06:12 -- scripts/common.sh@24 -- # return 0 00:20:06.540 08:06:12 -- scripts/common.sh@301 -- # echo 0000:00:06.0 00:20:06.540 08:06:12 -- scripts/common.sh@299 -- # for bdf in $(iter_all_pci_class_code "$@") 00:20:06.540 08:06:12 -- scripts/common.sh@300 -- # pci_can_use 0000:00:07.0 00:20:06.540 08:06:12 -- scripts/common.sh@15 -- # local i 00:20:06.540 08:06:12 -- scripts/common.sh@18 -- # [[ =~ 0000:00:07.0 ]] 00:20:06.540 08:06:12 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:20:06.540 08:06:12 -- scripts/common.sh@24 -- # return 0 00:20:06.540 08:06:12 -- scripts/common.sh@301 -- # echo 0000:00:07.0 00:20:06.540 08:06:12 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:20:06.540 08:06:12 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:06.0 ]] 00:20:06.540 08:06:12 -- scripts/common.sh@322 -- # uname -s 00:20:06.540 08:06:12 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:20:06.540 08:06:12 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:20:06.540 08:06:12 -- scripts/common.sh@320 -- # for bdf in "${nvmes[@]}" 00:20:06.540 08:06:12 -- scripts/common.sh@321 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:07.0 ]] 00:20:06.540 08:06:12 -- scripts/common.sh@322 -- # uname -s 00:20:06.540 08:06:12 -- scripts/common.sh@322 -- # [[ Linux == FreeBSD ]] 00:20:06.540 08:06:12 -- scripts/common.sh@325 -- # bdfs+=("$bdf") 00:20:06.540 08:06:12 -- scripts/common.sh@327 -- # (( 2 )) 00:20:06.540 08:06:12 -- scripts/common.sh@328 -- # printf '%s\n' 0000:00:06.0 0000:00:07.0 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@79 -- # (( 2 > 0 )) 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@81 -- # nvme=0000:00:06.0 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@83 -- # run_test spdk_target_abort spdk_target 00:20:06.540 08:06:12 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:06.540 08:06:12 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:06.540 08:06:12 -- common/autotest_common.sh@10 -- # set +x 00:20:06.540 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:20:06.540 ************************************ 00:20:06.540 START TEST spdk_target_abort 00:20:06.540 ************************************ 00:20:06.540 08:06:12 -- common/autotest_common.sh@1104 -- # spdk_target 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@44 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@46 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:06.0 -b spdk_target 00:20:06.540 08:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.540 08:06:12 -- common/autotest_common.sh@10 -- # set +x 00:20:06.540 spdk_targetn1 00:20:06.540 08:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:06.540 08:06:12 -- 
common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.540 08:06:12 -- common/autotest_common.sh@10 -- # set +x 00:20:06.540 [2024-07-13 08:06:12.192184] tcp.c: 659:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.540 08:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:spdk_target -a -s SPDKISFASTANDAWESOME 00:20:06.540 08:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.540 08:06:12 -- common/autotest_common.sh@10 -- # set +x 00:20:06.540 08:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:spdk_target spdk_targetn1 00:20:06.540 08:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.540 08:06:12 -- common/autotest_common.sh@10 -- # set +x 00:20:06.540 08:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@51 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:spdk_target -t tcp -a 10.0.0.2 -s 4420 00:20:06.540 08:06:12 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:06.540 08:06:12 -- common/autotest_common.sh@10 -- # set +x 00:20:06.540 [2024-07-13 08:06:12.220486] tcp.c: 951:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.540 08:06:12 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@53 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:spdk_target 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:spdk_target 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:06.540 08:06:12 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:09.821 Initializing NVMe Controllers 00:20:09.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:09.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:09.821 Initialization complete. Launching workers. 00:20:09.821 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 9962, failed: 0 00:20:09.822 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1096, failed to submit 8866 00:20:09.822 success 769, unsuccess 327, failed 0 00:20:09.822 08:06:15 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:09.822 08:06:15 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:13.108 Initializing NVMe Controllers 00:20:13.108 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:13.108 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:13.108 Initialization complete. Launching workers. 00:20:13.108 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 8896, failed: 0 00:20:13.108 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 1126, failed to submit 7770 00:20:13.108 success 365, unsuccess 761, failed 0 00:20:13.108 08:06:18 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:13.108 08:06:18 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:spdk_target' 00:20:16.412 Initializing NVMe Controllers 00:20:16.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:spdk_target 00:20:16.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 with lcore 0 00:20:16.412 Initialization complete. Launching workers. 
00:20:16.412 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) NSID 1 I/O completed: 29989, failed: 0 00:20:16.412 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:spdk_target) abort submitted 2310, failed to submit 27679 00:20:16.412 success 535, unsuccess 1775, failed 0 00:20:16.412 08:06:22 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:spdk_target 00:20:16.412 08:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:16.412 08:06:22 -- common/autotest_common.sh@10 -- # set +x 00:20:16.412 08:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:16.412 08:06:22 -- target/abort_qd_sizes.sh@56 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:20:16.412 08:06:22 -- common/autotest_common.sh@551 -- # xtrace_disable 00:20:16.412 08:06:22 -- common/autotest_common.sh@10 -- # set +x 00:20:16.670 08:06:22 -- common/autotest_common.sh@579 -- # [[ 0 == 0 ]] 00:20:16.670 08:06:22 -- target/abort_qd_sizes.sh@62 -- # killprocess 82147 00:20:16.671 08:06:22 -- common/autotest_common.sh@926 -- # '[' -z 82147 ']' 00:20:16.671 08:06:22 -- common/autotest_common.sh@930 -- # kill -0 82147 00:20:16.671 08:06:22 -- common/autotest_common.sh@931 -- # uname 00:20:16.671 08:06:22 -- common/autotest_common.sh@931 -- # '[' Linux = Linux ']' 00:20:16.671 08:06:22 -- common/autotest_common.sh@932 -- # ps --no-headers -o comm= 82147 00:20:16.671 killing process with pid 82147 00:20:16.671 08:06:22 -- common/autotest_common.sh@932 -- # process_name=reactor_0 00:20:16.671 08:06:22 -- common/autotest_common.sh@936 -- # '[' reactor_0 = sudo ']' 00:20:16.671 08:06:22 -- common/autotest_common.sh@944 -- # echo 'killing process with pid 82147' 00:20:16.671 08:06:22 -- common/autotest_common.sh@945 -- # kill 82147 00:20:16.671 08:06:22 -- common/autotest_common.sh@950 -- # wait 82147 00:20:16.929 ************************************ 00:20:16.929 END TEST spdk_target_abort 00:20:16.929 ************************************ 00:20:16.929 00:20:16.929 real 0m10.382s 00:20:16.929 user 0m42.267s 00:20:16.929 sys 0m2.073s 00:20:16.929 08:06:22 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:16.929 08:06:22 -- common/autotest_common.sh@10 -- # set +x 00:20:16.929 08:06:22 -- target/abort_qd_sizes.sh@84 -- # run_test kernel_target_abort kernel_target 00:20:16.929 08:06:22 -- common/autotest_common.sh@1077 -- # '[' 2 -le 1 ']' 00:20:16.929 08:06:22 -- common/autotest_common.sh@1083 -- # xtrace_disable 00:20:16.929 08:06:22 -- common/autotest_common.sh@10 -- # set +x 00:20:16.929 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 1096: kill: (59682) - No such process 00:20:16.929 ************************************ 00:20:16.929 START TEST kernel_target_abort 00:20:16.929 ************************************ 00:20:16.929 08:06:22 -- common/autotest_common.sh@1104 -- # kernel_target 00:20:16.929 08:06:22 -- target/abort_qd_sizes.sh@66 -- # local name=kernel_target 00:20:16.929 08:06:22 -- target/abort_qd_sizes.sh@68 -- # configure_kernel_target kernel_target 00:20:16.929 08:06:22 -- nvmf/common.sh@621 -- # kernel_name=kernel_target 00:20:16.929 08:06:22 -- nvmf/common.sh@622 -- # nvmet=/sys/kernel/config/nvmet 00:20:16.929 08:06:22 -- nvmf/common.sh@623 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/kernel_target 00:20:16.929 08:06:22 -- nvmf/common.sh@624 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:16.929 08:06:22 -- nvmf/common.sh@625 -- # 
kernel_port=/sys/kernel/config/nvmet/ports/1 00:20:16.929 08:06:22 -- nvmf/common.sh@627 -- # local block nvme 00:20:16.929 08:06:22 -- nvmf/common.sh@629 -- # [[ ! -e /sys/module/nvmet ]] 00:20:16.929 08:06:22 -- nvmf/common.sh@630 -- # modprobe nvmet 00:20:16.929 08:06:22 -- nvmf/common.sh@633 -- # [[ -e /sys/kernel/config/nvmet ]] 00:20:16.929 08:06:22 -- nvmf/common.sh@635 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:17.188 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:17.188 Waiting for block devices as requested 00:20:17.446 0000:00:06.0 (1b36 0010): uio_pci_generic -> nvme 00:20:17.446 0000:00:07.0 (1b36 0010): uio_pci_generic -> nvme 00:20:17.446 08:06:23 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:17.446 08:06:23 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme0n1 ]] 00:20:17.446 08:06:23 -- nvmf/common.sh@640 -- # block_in_use nvme0n1 00:20:17.446 08:06:23 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:20:17.446 08:06:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:20:17.446 No valid GPT data, bailing 00:20:17.446 08:06:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:20:17.446 08:06:23 -- scripts/common.sh@393 -- # pt= 00:20:17.446 08:06:23 -- scripts/common.sh@394 -- # return 1 00:20:17.446 08:06:23 -- nvmf/common.sh@640 -- # nvme=/dev/nvme0n1 00:20:17.446 08:06:23 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:17.446 08:06:23 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n1 ]] 00:20:17.446 08:06:23 -- nvmf/common.sh@640 -- # block_in_use nvme1n1 00:20:17.446 08:06:23 -- scripts/common.sh@380 -- # local block=nvme1n1 pt 00:20:17.446 08:06:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:20:17.704 No valid GPT data, bailing 00:20:17.704 08:06:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:20:17.704 08:06:23 -- scripts/common.sh@393 -- # pt= 00:20:17.704 08:06:23 -- scripts/common.sh@394 -- # return 1 00:20:17.704 08:06:23 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n1 00:20:17.704 08:06:23 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:17.704 08:06:23 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n2 ]] 00:20:17.704 08:06:23 -- nvmf/common.sh@640 -- # block_in_use nvme1n2 00:20:17.704 08:06:23 -- scripts/common.sh@380 -- # local block=nvme1n2 pt 00:20:17.704 08:06:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n2 00:20:17.704 No valid GPT data, bailing 00:20:17.704 08:06:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:20:17.704 08:06:23 -- scripts/common.sh@393 -- # pt= 00:20:17.704 08:06:23 -- scripts/common.sh@394 -- # return 1 00:20:17.704 08:06:23 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n2 00:20:17.704 08:06:23 -- nvmf/common.sh@638 -- # for block in /sys/block/nvme* 00:20:17.704 08:06:23 -- nvmf/common.sh@639 -- # [[ -e /sys/block/nvme1n3 ]] 00:20:17.704 08:06:23 -- nvmf/common.sh@640 -- # block_in_use nvme1n3 00:20:17.704 08:06:23 -- scripts/common.sh@380 -- # local block=nvme1n3 pt 00:20:17.704 08:06:23 -- scripts/common.sh@389 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n3 00:20:17.704 No valid GPT data, bailing 00:20:17.704 08:06:23 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:20:17.704 08:06:23 -- scripts/common.sh@393 -- # pt= 00:20:17.704 08:06:23 -- scripts/common.sh@394 
-- # return 1 00:20:17.704 08:06:23 -- nvmf/common.sh@640 -- # nvme=/dev/nvme1n3 00:20:17.704 08:06:23 -- nvmf/common.sh@643 -- # [[ -b /dev/nvme1n3 ]] 00:20:17.704 08:06:23 -- nvmf/common.sh@645 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:17.704 08:06:23 -- nvmf/common.sh@646 -- # mkdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:17.704 08:06:23 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:20:17.704 08:06:23 -- nvmf/common.sh@652 -- # echo SPDK-kernel_target 00:20:17.704 08:06:23 -- nvmf/common.sh@654 -- # echo 1 00:20:17.704 08:06:23 -- nvmf/common.sh@655 -- # echo /dev/nvme1n3 00:20:17.704 08:06:23 -- nvmf/common.sh@656 -- # echo 1 00:20:17.705 08:06:23 -- nvmf/common.sh@662 -- # echo 10.0.0.1 00:20:17.705 08:06:23 -- nvmf/common.sh@663 -- # echo tcp 00:20:17.705 08:06:23 -- nvmf/common.sh@664 -- # echo 4420 00:20:17.705 08:06:23 -- nvmf/common.sh@665 -- # echo ipv4 00:20:17.705 08:06:23 -- nvmf/common.sh@668 -- # ln -s /sys/kernel/config/nvmet/subsystems/kernel_target /sys/kernel/config/nvmet/ports/1/subsystems/ 00:20:17.705 08:06:23 -- nvmf/common.sh@671 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:13d3a838-6067-4799-8998-c5cad9c1d570 --hostid=13d3a838-6067-4799-8998-c5cad9c1d570 -a 10.0.0.1 -t tcp -s 4420 00:20:17.705 00:20:17.705 Discovery Log Number of Records 2, Generation counter 2 00:20:17.705 =====Discovery Log Entry 0====== 00:20:17.705 trtype: tcp 00:20:17.705 adrfam: ipv4 00:20:17.705 subtype: current discovery subsystem 00:20:17.705 treq: not specified, sq flow control disable supported 00:20:17.705 portid: 1 00:20:17.705 trsvcid: 4420 00:20:17.705 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:20:17.705 traddr: 10.0.0.1 00:20:17.705 eflags: none 00:20:17.705 sectype: none 00:20:17.705 =====Discovery Log Entry 1====== 00:20:17.705 trtype: tcp 00:20:17.705 adrfam: ipv4 00:20:17.705 subtype: nvme subsystem 00:20:17.705 treq: not specified, sq flow control disable supported 00:20:17.705 portid: 1 00:20:17.705 trsvcid: 4420 00:20:17.705 subnqn: kernel_target 00:20:17.705 traddr: 10.0.0.1 00:20:17.705 eflags: none 00:20:17.705 sectype: none 00:20:17.705 08:06:23 -- target/abort_qd_sizes.sh@69 -- # rabort tcp IPv4 10.0.0.1 4420 kernel_target 00:20:17.705 08:06:23 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:20:17.705 08:06:23 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:20:17.705 08:06:23 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:20:17.705 08:06:23 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:20:17.705 08:06:23 -- target/abort_qd_sizes.sh@21 -- # local subnqn=kernel_target 00:20:17.705 08:06:23 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:20:17.705 08:06:23 -- target/abort_qd_sizes.sh@24 -- # local target r 00:20:17.705 08:06:23 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:20:17.705 08:06:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:17.963 08:06:23 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:20:17.963 08:06:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:17.963 08:06:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:20:17.963 08:06:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:17.963 08:06:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:20:17.963 08:06:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam 
traddr trsvcid subnqn 00:20:17.963 08:06:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:20:17.963 08:06:23 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:20:17.963 08:06:23 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:17.963 08:06:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:17.963 08:06:23 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:21.248 Initializing NVMe Controllers 00:20:21.248 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:21.248 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:21.248 Initialization complete. Launching workers. 00:20:21.248 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 29531, failed: 0 00:20:21.248 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 29531, failed to submit 0 00:20:21.248 success 0, unsuccess 29531, failed 0 00:20:21.248 08:06:26 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:21.248 08:06:26 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:24.538 Initializing NVMe Controllers 00:20:24.538 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:24.538 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:24.538 Initialization complete. Launching workers. 00:20:24.538 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 64932, failed: 0 00:20:24.538 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 26996, failed to submit 37936 00:20:24.538 success 0, unsuccess 26996, failed 0 00:20:24.538 08:06:29 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:20:24.538 08:06:29 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:kernel_target' 00:20:27.820 Initializing NVMe Controllers 00:20:27.820 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: kernel_target 00:20:27.821 Associating TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 with lcore 0 00:20:27.821 Initialization complete. Launching workers. 
00:20:27.821 NS: TCP (addr:10.0.0.1 subnqn:kernel_target) NSID 1 I/O completed: 75438, failed: 0 00:20:27.821 CTRLR: TCP (addr:10.0.0.1 subnqn:kernel_target) abort submitted 18850, failed to submit 56588 00:20:27.821 success 0, unsuccess 18850, failed 0 00:20:27.821 08:06:33 -- target/abort_qd_sizes.sh@70 -- # clean_kernel_target 00:20:27.821 08:06:33 -- nvmf/common.sh@675 -- # [[ -e /sys/kernel/config/nvmet/subsystems/kernel_target ]] 00:20:27.821 08:06:33 -- nvmf/common.sh@677 -- # echo 0 00:20:27.821 08:06:33 -- nvmf/common.sh@679 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/kernel_target 00:20:27.821 08:06:33 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target/namespaces/1 00:20:27.821 08:06:33 -- nvmf/common.sh@681 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:20:27.821 08:06:33 -- nvmf/common.sh@682 -- # rmdir /sys/kernel/config/nvmet/subsystems/kernel_target 00:20:27.821 08:06:33 -- nvmf/common.sh@684 -- # modules=(/sys/module/nvmet/holders/*) 00:20:27.821 08:06:33 -- nvmf/common.sh@686 -- # modprobe -r nvmet_tcp nvmet 00:20:27.821 ************************************ 00:20:27.821 END TEST kernel_target_abort 00:20:27.821 ************************************ 00:20:27.821 00:20:27.821 real 0m10.555s 00:20:27.821 user 0m5.501s 00:20:27.821 sys 0m2.458s 00:20:27.821 08:06:33 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:27.821 08:06:33 -- common/autotest_common.sh@10 -- # set +x 00:20:27.821 08:06:33 -- target/abort_qd_sizes.sh@86 -- # trap - SIGINT SIGTERM EXIT 00:20:27.821 08:06:33 -- target/abort_qd_sizes.sh@87 -- # nvmftestfini 00:20:27.821 08:06:33 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:27.821 08:06:33 -- nvmf/common.sh@116 -- # sync 00:20:27.821 08:06:33 -- nvmf/common.sh@118 -- # '[' tcp == tcp ']' 00:20:27.821 08:06:33 -- nvmf/common.sh@119 -- # set +e 00:20:27.821 08:06:33 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:27.821 08:06:33 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-tcp 00:20:27.821 rmmod nvme_tcp 00:20:27.821 rmmod nvme_fabrics 00:20:27.821 rmmod nvme_keyring 00:20:27.821 08:06:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:27.821 08:06:33 -- nvmf/common.sh@123 -- # set -e 00:20:27.821 08:06:33 -- nvmf/common.sh@124 -- # return 0 00:20:27.821 08:06:33 -- nvmf/common.sh@477 -- # '[' -n 82147 ']' 00:20:27.821 08:06:33 -- nvmf/common.sh@478 -- # killprocess 82147 00:20:27.821 Process with pid 82147 is not found 00:20:27.821 08:06:33 -- common/autotest_common.sh@926 -- # '[' -z 82147 ']' 00:20:27.821 08:06:33 -- common/autotest_common.sh@930 -- # kill -0 82147 00:20:27.821 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (82147) - No such process 00:20:27.821 08:06:33 -- common/autotest_common.sh@953 -- # echo 'Process with pid 82147 is not found' 00:20:27.821 08:06:33 -- nvmf/common.sh@480 -- # '[' iso == iso ']' 00:20:27.821 08:06:33 -- nvmf/common.sh@481 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:20:28.159 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:28.159 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:28.159 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:28.416 08:06:33 -- nvmf/common.sh@483 -- # [[ tcp == \t\c\p ]] 00:20:28.416 08:06:33 -- nvmf/common.sh@484 -- # nvmf_tcp_fini 00:20:28.416 08:06:33 -- nvmf/common.sh@273 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:28.416 08:06:33 -- nvmf/common.sh@277 -- # 
remove_spdk_ns 00:20:28.416 08:06:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.416 08:06:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:20:28.416 08:06:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:28.416 08:06:34 -- nvmf/common.sh@278 -- # ip -4 addr flush nvmf_init_if 00:20:28.416 00:20:28.416 real 0m24.366s 00:20:28.416 user 0m49.115s 00:20:28.416 sys 0m5.856s 00:20:28.416 ************************************ 00:20:28.416 END TEST nvmf_abort_qd_sizes 00:20:28.416 ************************************ 00:20:28.416 08:06:34 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:28.416 08:06:34 -- common/autotest_common.sh@10 -- # set +x 00:20:28.416 08:06:34 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:28.416 08:06:34 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:28.416 08:06:34 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:20:28.416 08:06:34 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:20:28.416 08:06:34 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:20:28.416 08:06:34 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:28.416 08:06:34 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:28.416 08:06:34 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:28.416 08:06:34 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:20:28.416 08:06:34 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:28.416 08:06:34 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:20:28.416 08:06:34 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:28.416 08:06:34 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:28.416 08:06:34 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:20:28.416 08:06:34 -- spdk/autotest.sh@378 -- # [[ 0 -eq 1 ]] 00:20:28.416 08:06:34 -- spdk/autotest.sh@383 -- # trap - SIGINT SIGTERM EXIT 00:20:28.416 08:06:34 -- spdk/autotest.sh@385 -- # timing_enter post_cleanup 00:20:28.416 08:06:34 -- common/autotest_common.sh@712 -- # xtrace_disable 00:20:28.416 08:06:34 -- common/autotest_common.sh@10 -- # set +x 00:20:28.416 08:06:34 -- spdk/autotest.sh@386 -- # autotest_cleanup 00:20:28.416 08:06:34 -- common/autotest_common.sh@1371 -- # local autotest_es=0 00:20:28.416 08:06:34 -- common/autotest_common.sh@1372 -- # xtrace_disable 00:20:28.416 08:06:34 -- common/autotest_common.sh@10 -- # set +x 00:20:30.318 INFO: APP EXITING 00:20:30.318 INFO: killing all VMs 00:20:30.318 INFO: killing vhost app 00:20:30.318 INFO: EXIT DONE 00:20:30.576 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:30.834 0000:00:06.0 (1b36 0010): Already using the nvme driver 00:20:30.834 0000:00:07.0 (1b36 0010): Already using the nvme driver 00:20:31.399 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:31.399 Cleaning 00:20:31.399 Removing: /var/run/dpdk/spdk0/config 00:20:31.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:31.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:31.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:31.399 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:31.399 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:31.399 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:31.399 Removing: /var/run/dpdk/spdk1/config 00:20:31.400 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:20:31.400 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:20:31.400 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 
00:20:31.400 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:20:31.400 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:20:31.400 Removing: /var/run/dpdk/spdk1/hugepage_info 00:20:31.400 Removing: /var/run/dpdk/spdk2/config 00:20:31.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:20:31.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:20:31.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:20:31.400 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:20:31.400 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:20:31.400 Removing: /var/run/dpdk/spdk2/hugepage_info 00:20:31.400 Removing: /var/run/dpdk/spdk3/config 00:20:31.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:20:31.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:20:31.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:20:31.400 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:20:31.400 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:20:31.400 Removing: /var/run/dpdk/spdk3/hugepage_info 00:20:31.400 Removing: /var/run/dpdk/spdk4/config 00:20:31.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:20:31.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:20:31.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:20:31.400 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:20:31.657 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:20:31.657 Removing: /var/run/dpdk/spdk4/hugepage_info 00:20:31.657 Removing: /dev/shm/nvmf_trace.0 00:20:31.657 Removing: /dev/shm/spdk_tgt_trace.pid65449 00:20:31.657 Removing: /var/run/dpdk/spdk0 00:20:31.657 Removing: /var/run/dpdk/spdk1 00:20:31.657 Removing: /var/run/dpdk/spdk2 00:20:31.657 Removing: /var/run/dpdk/spdk3 00:20:31.657 Removing: /var/run/dpdk/spdk4 00:20:31.657 Removing: /var/run/dpdk/spdk_pid65310 00:20:31.657 Removing: /var/run/dpdk/spdk_pid65449 00:20:31.657 Removing: /var/run/dpdk/spdk_pid65686 00:20:31.657 Removing: /var/run/dpdk/spdk_pid65871 00:20:31.657 Removing: /var/run/dpdk/spdk_pid66005 00:20:31.657 Removing: /var/run/dpdk/spdk_pid66074 00:20:31.657 Removing: /var/run/dpdk/spdk_pid66138 00:20:31.657 Removing: /var/run/dpdk/spdk_pid66228 00:20:31.657 Removing: /var/run/dpdk/spdk_pid66293 00:20:31.657 Removing: /var/run/dpdk/spdk_pid66337 00:20:31.657 Removing: /var/run/dpdk/spdk_pid66367 00:20:31.657 Removing: /var/run/dpdk/spdk_pid66422 00:20:31.657 Removing: /var/run/dpdk/spdk_pid66514 00:20:31.657 Removing: /var/run/dpdk/spdk_pid66927 00:20:31.657 Removing: /var/run/dpdk/spdk_pid66974 00:20:31.657 Removing: /var/run/dpdk/spdk_pid67025 00:20:31.657 Removing: /var/run/dpdk/spdk_pid67041 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67097 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67113 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67173 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67185 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67236 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67254 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67294 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67312 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67428 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67458 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67526 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67583 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67602 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67661 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67669 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67692 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67700 
00:20:31.658 Removing: /var/run/dpdk/spdk_pid67728 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67738 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67761 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67774 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67797 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67805 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67828 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67841 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67864 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67872 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67900 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67908 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67931 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67941 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67969 00:20:31.658 Removing: /var/run/dpdk/spdk_pid67977 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68001 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68014 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68038 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68046 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68069 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68082 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68105 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68113 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68140 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68149 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68172 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68180 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68208 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68219 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68245 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68256 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68287 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68295 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68318 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68331 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68355 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68413 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68486 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68776 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68788 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68813 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68825 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68833 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68845 00:20:31.658 Removing: /var/run/dpdk/spdk_pid68852 00:20:31.916 Removing: /var/run/dpdk/spdk_pid68865 00:20:31.916 Removing: /var/run/dpdk/spdk_pid68877 00:20:31.916 Removing: /var/run/dpdk/spdk_pid68884 00:20:31.916 Removing: /var/run/dpdk/spdk_pid68897 00:20:31.916 Removing: /var/run/dpdk/spdk_pid68909 00:20:31.916 Removing: /var/run/dpdk/spdk_pid68916 00:20:31.916 Removing: /var/run/dpdk/spdk_pid68929 00:20:31.916 Removing: /var/run/dpdk/spdk_pid68936 00:20:31.916 Removing: /var/run/dpdk/spdk_pid68948 00:20:31.916 Removing: /var/run/dpdk/spdk_pid68956 00:20:31.916 Removing: /var/run/dpdk/spdk_pid68968 00:20:31.916 Removing: /var/run/dpdk/spdk_pid68980 00:20:31.916 Removing: /var/run/dpdk/spdk_pid68988 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69017 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69024 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69051 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69102 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69128 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69132 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69155 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69164 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69166 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69199 00:20:31.916 Removing: 
/var/run/dpdk/spdk_pid69206 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69227 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69234 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69236 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69243 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69245 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69247 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69254 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69256 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69282 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69303 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69307 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69335 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69339 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69346 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69375 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69381 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69407 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69409 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69416 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69418 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69420 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69427 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69429 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69436 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69498 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69522 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69602 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69628 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69660 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69674 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69683 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69697 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69721 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69735 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69797 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69800 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69825 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69879 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69911 00:20:31.916 Removing: /var/run/dpdk/spdk_pid69933 00:20:31.916 Removing: /var/run/dpdk/spdk_pid70013 00:20:31.916 Removing: /var/run/dpdk/spdk_pid70052 00:20:31.916 Removing: /var/run/dpdk/spdk_pid70079 00:20:31.916 Removing: /var/run/dpdk/spdk_pid70283 00:20:31.916 Removing: /var/run/dpdk/spdk_pid70369 00:20:31.916 Removing: /var/run/dpdk/spdk_pid70397 00:20:31.916 Removing: /var/run/dpdk/spdk_pid70701 00:20:31.916 Removing: /var/run/dpdk/spdk_pid70727 00:20:31.916 Removing: /var/run/dpdk/spdk_pid70990 00:20:31.916 Removing: /var/run/dpdk/spdk_pid71311 00:20:31.916 Removing: /var/run/dpdk/spdk_pid71466 00:20:31.916 Removing: /var/run/dpdk/spdk_pid72067 00:20:31.916 Removing: /var/run/dpdk/spdk_pid72687 00:20:31.916 Removing: /var/run/dpdk/spdk_pid72743 00:20:31.916 Removing: /var/run/dpdk/spdk_pid72775 00:20:31.916 Removing: /var/run/dpdk/spdk_pid73825 00:20:31.916 Removing: /var/run/dpdk/spdk_pid74017 00:20:31.916 Removing: /var/run/dpdk/spdk_pid74278 00:20:31.916 Removing: /var/run/dpdk/spdk_pid74327 00:20:31.916 Removing: /var/run/dpdk/spdk_pid74393 00:20:31.916 Removing: /var/run/dpdk/spdk_pid74410 00:20:31.916 Removing: /var/run/dpdk/spdk_pid74426 00:20:31.916 Removing: /var/run/dpdk/spdk_pid74442 00:20:32.175 Removing: /var/run/dpdk/spdk_pid74515 00:20:32.175 Removing: /var/run/dpdk/spdk_pid74580 00:20:32.175 Removing: /var/run/dpdk/spdk_pid74688 00:20:32.175 Removing: /var/run/dpdk/spdk_pid74747 00:20:32.175 Removing: /var/run/dpdk/spdk_pid75054 00:20:32.175 Removing: /var/run/dpdk/spdk_pid75311 
00:20:32.175 Removing: /var/run/dpdk/spdk_pid75313 00:20:32.175 Removing: /var/run/dpdk/spdk_pid76814 00:20:32.175 Removing: /var/run/dpdk/spdk_pid76816 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77055 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77063 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77075 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77089 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77094 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77146 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77149 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77197 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77199 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77247 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77253 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77556 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77588 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77668 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77722 00:20:32.175 Removing: /var/run/dpdk/spdk_pid77980 00:20:32.175 Removing: /var/run/dpdk/spdk_pid78079 00:20:32.175 Removing: /var/run/dpdk/spdk_pid78376 00:20:32.175 Removing: /var/run/dpdk/spdk_pid78826 00:20:32.175 Removing: /var/run/dpdk/spdk_pid79171 00:20:32.175 Removing: /var/run/dpdk/spdk_pid79204 00:20:32.175 Removing: /var/run/dpdk/spdk_pid79233 00:20:32.175 Removing: /var/run/dpdk/spdk_pid79258 00:20:32.175 Removing: /var/run/dpdk/spdk_pid79334 00:20:32.175 Removing: /var/run/dpdk/spdk_pid79371 00:20:32.175 Removing: /var/run/dpdk/spdk_pid79402 00:20:32.175 Removing: /var/run/dpdk/spdk_pid79433 00:20:32.175 Removing: /var/run/dpdk/spdk_pid79708 00:20:32.175 Removing: /var/run/dpdk/spdk_pid80520 00:20:32.175 Removing: /var/run/dpdk/spdk_pid80599 00:20:32.175 Removing: /var/run/dpdk/spdk_pid80715 00:20:32.175 Removing: /var/run/dpdk/spdk_pid81185 00:20:32.175 Removing: /var/run/dpdk/spdk_pid81278 00:20:32.175 Removing: /var/run/dpdk/spdk_pid81376 00:20:32.175 Removing: /var/run/dpdk/spdk_pid81439 00:20:32.175 Removing: /var/run/dpdk/spdk_pid81552 00:20:32.175 Removing: /var/run/dpdk/spdk_pid81625 00:20:32.175 Removing: /var/run/dpdk/spdk_pid82192 00:20:32.175 Removing: /var/run/dpdk/spdk_pid82208 00:20:32.175 Removing: /var/run/dpdk/spdk_pid82221 00:20:32.175 Removing: /var/run/dpdk/spdk_pid82441 00:20:32.175 Removing: /var/run/dpdk/spdk_pid82458 00:20:32.175 Removing: /var/run/dpdk/spdk_pid82474 00:20:32.175 Clean 00:20:32.175 killing process with pid 59678 00:20:32.175 Process with pid 59682 is not found 00:20:32.175 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 930: kill: (59682) - No such process 00:20:32.175 08:06:37 -- common/autotest_common.sh@1436 -- # return 0 00:20:32.175 08:06:37 -- spdk/autotest.sh@387 -- # timing_exit post_cleanup 00:20:32.175 08:06:37 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:32.176 08:06:37 -- common/autotest_common.sh@10 -- # set +x 00:20:32.434 08:06:38 -- spdk/autotest.sh@389 -- # timing_exit autotest 00:20:32.434 08:06:38 -- common/autotest_common.sh@718 -- # xtrace_disable 00:20:32.434 08:06:38 -- common/autotest_common.sh@10 -- # set +x 00:20:32.434 08:06:38 -- spdk/autotest.sh@390 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:32.434 08:06:38 -- spdk/autotest.sh@392 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:32.434 08:06:38 -- spdk/autotest.sh@392 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:32.434 08:06:38 -- spdk/autotest.sh@394 -- # hash lcov 00:20:32.434 08:06:38 -- spdk/autotest.sh@394 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 
00:20:32.434 08:06:38 -- spdk/autotest.sh@396 -- # hostname 00:20:32.434 08:06:38 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1716830599-074-updated-1705279005 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:32.692 geninfo: WARNING: invalid characters removed from testname! 00:20:59.235 08:07:02 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:01.768 08:07:06 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:04.304 08:07:09 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:07.591 08:07:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:10.123 08:07:15 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:12.653 08:07:18 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:15.969 08:07:21 -- spdk/autotest.sh@403 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:15.969 08:07:21 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:15.969 08:07:21 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:21:15.969 08:07:21 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:15.969 08:07:21 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:15.969 08:07:21 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.969 08:07:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.969 08:07:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.969 08:07:21 -- paths/export.sh@5 -- $ export PATH 00:21:15.969 08:07:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:15.969 08:07:21 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:21:15.969 08:07:21 -- common/autobuild_common.sh@435 -- $ date +%s 00:21:15.969 08:07:21 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1720858041.XXXXXX 00:21:15.969 08:07:21 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1720858041.bFVolJ 00:21:15.969 08:07:21 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:21:15.969 08:07:21 -- common/autobuild_common.sh@441 -- $ '[' -n v22.11.4 ']' 00:21:15.969 08:07:21 -- common/autobuild_common.sh@442 -- $ dirname /home/vagrant/spdk_repo/dpdk/build 00:21:15.969 08:07:21 -- common/autobuild_common.sh@442 -- $ scanbuild_exclude=' --exclude /home/vagrant/spdk_repo/dpdk' 00:21:15.969 08:07:21 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:21:15.970 08:07:21 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/dpdk --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:21:15.970 08:07:21 -- common/autobuild_common.sh@451 -- $ get_config_params 00:21:15.970 08:07:21 -- common/autotest_common.sh@387 -- $ xtrace_disable 00:21:15.970 08:07:21 -- common/autotest_common.sh@10 -- $ set +x 00:21:15.970 08:07:21 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-uring --with-dpdk=/home/vagrant/spdk_repo/dpdk/build' 00:21:15.970 08:07:21 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:21:15.970 08:07:21 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:21:15.970 08:07:21 -- 
spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:21:15.970 08:07:21 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:21:15.970 08:07:21 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:21:15.970 08:07:21 -- spdk/autopackage.sh@19 -- $ timing_finish 00:21:15.970 08:07:21 -- common/autotest_common.sh@724 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:15.970 08:07:21 -- common/autotest_common.sh@725 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:21:15.970 08:07:21 -- common/autotest_common.sh@727 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:15.970 08:07:21 -- spdk/autopackage.sh@20 -- $ exit 0 00:21:15.970 + [[ -n 5868 ]] 00:21:15.970 + sudo kill 5868 00:21:15.980 [Pipeline] } 00:21:15.998 [Pipeline] // timeout 00:21:16.003 [Pipeline] } 00:21:16.021 [Pipeline] // stage 00:21:16.028 [Pipeline] } 00:21:16.051 [Pipeline] // catchError 00:21:16.098 [Pipeline] stage 00:21:16.102 [Pipeline] { (Stop VM) 00:21:16.113 [Pipeline] sh 00:21:16.387 + vagrant halt 00:21:20.573 ==> default: Halting domain... 00:21:27.146 [Pipeline] sh 00:21:27.424 + vagrant destroy -f 00:21:31.612 ==> default: Removing domain... 00:21:31.625 [Pipeline] sh 00:21:31.905 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:21:31.914 [Pipeline] } 00:21:31.932 [Pipeline] // stage 00:21:31.938 [Pipeline] } 00:21:31.956 [Pipeline] // dir 00:21:31.961 [Pipeline] } 00:21:31.979 [Pipeline] // wrap 00:21:31.986 [Pipeline] } 00:21:31.999 [Pipeline] // catchError 00:21:32.008 [Pipeline] stage 00:21:32.010 [Pipeline] { (Epilogue) 00:21:32.023 [Pipeline] sh 00:21:32.380 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:38.951 [Pipeline] catchError 00:21:38.953 [Pipeline] { 00:21:38.965 [Pipeline] sh 00:21:39.239 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:39.497 Artifacts sizes are good 00:21:39.505 [Pipeline] } 00:21:39.517 [Pipeline] // catchError 00:21:39.526 [Pipeline] archiveArtifacts 00:21:39.531 Archiving artifacts 00:21:39.699 [Pipeline] cleanWs 00:21:39.710 [WS-CLEANUP] Deleting project workspace... 00:21:39.710 [WS-CLEANUP] Deferred wipeout is used... 00:21:39.716 [WS-CLEANUP] done 00:21:39.717 [Pipeline] } 00:21:39.730 [Pipeline] // stage 00:21:39.735 [Pipeline] } 00:21:39.746 [Pipeline] // node 00:21:39.750 [Pipeline] End of Pipeline 00:21:39.782 Finished: SUCCESS